
Recommended Readings in AI - a list by Russell and Norvig - fogus
http://aima.cs.berkeley.edu/books.html
======
Dn_Ab
Ever since Euclid listed (collected?) his axioms, the march towards AI became
inevitable. People don't realize how much AI "failures" have contributed to
computing, programming, and society in general. Some examples include Lisp,
functional programming, garbage collection, object-oriented programming, and
Walmart.

~~~
samfoo
Forgive my ignorance, I'm actually curious. How did AI failures contribute to
FP, garbage collection, and OO?

~~~
schwabacher
And Walmart?

~~~
wmf
AFAIK operations research and supply chain management use techniques
originating from AI.

~~~
onan_barbarian
AFAWK (As Far As Wikipedia Knows), operations research as a formal area kicked
off in 1937 and had 1000 people working on it in Britain during World War 2.
It may in fact be true that some techniques from AI have cross-fertilized, but
the credibility of the claim that "we owe operations research and supply chain
management to AI" is very low.

What's next: Minsky traveled back in time in a LISP Machine and impregnated
the Rev. Bayes' mother?

I'm willing to give 'em Prolog (take it, please!), but the confluence of FP and
AI might owe more to the fact that AI (especially 'strong AI') was considered
a Big Deal early on in computing, and as such, ideas that were invented/used
contemporaneously tend to be associated with AI.

------
dvse
I never understood the popularity of their AI text - the discussion of topics
other than the most basic search methods is uniformly obtuse, and the authors
hardly ever make any of the important connections with literature outside
their "field", e.g. between reinforcement learning and classical control (see,
for example, Russ Tedrake's notes on OCW [1]).

1\. <http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-832-underactuated-robotics-spring-2009/>

~~~
kenjackson
Do you have a better AI text? I recently (years ago) tried to teach from
Artificial Intelligence: A New Synthesis
(<http://www.amazon.com/Artificial-Intelligence-Synthesis-Nils-Nilsson/dp/1558604677>),
but found it seriously lacking.

For practicing programmers I've found books like "AI for Game Programmers" to
be surprisingly good, although they clearly lack the breadth and theoretical
basis of an actual text.

What would be nice are video lectures for a class that follows this text, à la
the SICP lectures from MIT.

~~~
dvse
Certainly nothing remotely comparable to SICP exists. From my point of view
people who are interested in these topics are much better off getting some
intuition for basic problems in scientific computing and then making the
requisite connections themselves. I can wholeheartedly recommend the recent
book by Gilbert Strang "Computational Science and Engineering" [1] and
companion video lectures for MIT courses 18.085 and 18.086.

1\. <http://www-math.mit.edu/cse/>

~~~
kenjackson
Thanks for the Strang link. For some reason I had never noticed this text and
the lecture videos. He's definitely a treasure in the realm of math education.

------
SeanLuke
Sadly, it's three editions in, and AIMA is still hopeless regarding stochastic
optimization (genetic algorithms, simulated annealing, ant colony
optimization, hill climbing, and the like) and gradient-based optimization.

It seems they don't want to be bothered with the significant difference
between search and optimization. So they stick all the stochastic optimization
stuff into a section called "local search and optimization" and place it
under the Search chapter (it's not search, and almost none of it is local).
Then they separate out optimization methods like gradient descent, placing
them under "local search in continuous spaces", as if (1) they were search and
(2) stochastic optimization weren't applied to continuous spaces.

And if this wasn't muddled enough, their recommended books for stochastic
optimization aren't under the Search chapter at all -- they've been placed
under the Machine Learning chapter. And it's a strange collection.

I'm pretty disappointed with AIMA's seemingly poor understanding of this area.
Well, I guess at least it's better than their cursory treatment of multiagent
systems.

~~~
bpodgursky
How is optimization not a form of search? It's a search through the solution
space that tries to find a global minimum or maximum.

Also, of the techniques you list, simulated annealing, hill climbing, and
gradient-based optimization are all local search methods (genetic algorithms
may or may not be, depending on your emphasis on mutation vs. recombination).
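
To make "local" concrete, here's a minimal hill-climbing sketch in Python (the
quartic test function is just an illustration): started in the wrong basin,
strictly improving moves can never cross the barrier to the global minimum.

```python
import random

def hill_climb(f, x0, step=0.1, iters=2000, seed=0):
    """Greedy local search: accept a random neighbor only if it improves f."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)
        fc = f(cand)
        if fc < fx:  # strictly improving moves only
            x, fx = cand, fc
    return x, fx

# Quartic with a local minimum at x=0 (f=10) and the global one
# near x=3.1 (f is about -8.2).
f = lambda x: x**4 - 5 * x**3 + 4 * x**2 + 10
x, fx = hill_climb(f, x0=-1.0)  # starts in the local basin and stays there
```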

~~~
Dn_Ab
I agree with you, most of those methods are local and optimization is a form
of search typically used for continuous and usually differentiable spaces. One
can also say search is a form of optimization.

But based on my reading, genetic algorithms are better known for being
susceptible to local minima, while simulated annealing is, in theory, a global
method.

\---

As for genetic algorithms - for optimization I prefer Differential Evolution.
For exploration, maybe genetic algorithms are more fruitful, although I think
genetic programming and Learning Classifier Systems are slept on. Speaking of
slept on, another underappreciated but much more widely used method is
stochastic gradient descent. Global optima might be overrated.
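
Since DE came up: a bare-bones DE/rand/1/bin sketch in Python (the population
size, F, CR, and the sphere test function are just illustrative choices):

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9, gens=300, seed=1):
    """DE/rand/1/bin: perturb each vector with a scaled difference of two
    others, binomially cross with the parent, keep whichever scores better."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)  # force at least one mutated coordinate
            trial = []
            for j in range(dim):
                if j == jrand or rng.random() < CR:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                else:
                    v = pop[i][j]
                lo, hi = bounds[j]
                trial.append(min(max(v, lo), hi))  # clip to the search box
            ft = f(trial)
            if ft <= fit[i]:  # greedy one-to-one selection
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]

# Sphere function: global minimum 0 at the origin.
x, fx = differential_evolution(lambda v: sum(t * t for t in v), [(-5, 5)] * 3)
```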

~~~
SeanLuke
> _But Based on my readings genetic algorithms are more known to be
> susceptible to local minima, while simulated annealing in theory is a global
> method._

Both simulated annealing and genetic algorithms are global methods, and both
can get caught in local minima. The same goes for pretty much every other
global stochastic optimization method out there.

Simulated annealing got this "special" reputation among engineers back in the
'80s because the Metropolis algorithm had a formal proof of global guarantees
-- run it long enough and it's guaranteed to find the global optimum. But
nowadays most every algorithm in this category has similar guarantees. It's
relatively easy to make such a guarantee as it turns out.
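
For reference, the Metropolis acceptance rule is only a few lines. A minimal
simulated-annealing sketch in Python (the temperature schedule and the quartic
test function are illustrative choices), started in a basin a pure hill
climber could never leave:

```python
import math
import random

def anneal(f, x0, T0=5.0, cooling=0.9995, step=0.5, iters=20000, seed=0):
    """Metropolis rule: always accept improvements; accept an uphill move of
    size delta with probability exp(-delta / T), cooling T geometrically."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    T = T0
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)
        delta = f(cand) - fx
        if delta <= 0 or rng.random() < math.exp(-delta / T):
            x, fx = cand, fx + delta
            if fx < fbest:
                best, fbest = x, fx
        T *= cooling
    return best, fbest

# Quartic with a local minimum at x=0 (f=10) and the global one near x=3.1
# (f is about -8.2); start in the wrong basin.
f = lambda x: x**4 - 5 * x**3 + 4 * x**2 + 10
best, fbest = anneal(f, x0=-1.0)
```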

I like differential evolution too, though it's pretty exploitative.

------
swannodette
This is a goldmine. Thanks for posting this. Interesting to see how much
Prolog and Lisp texts dominate the programming section :)

~~~
daviddavis
I came in here to say the same thing. I've been learning Lisp and some other
functional languages but I'm still not sure why Lisp and Prolog make such good
AI languages.

~~~
ludwigvan
Isn't it somewhat historic? Lisp was invented by McCarthy, who also coined the
term AI. It was perhaps simply the language AI researchers were most
acquainted with, and it remained a tradition. (There are technical reasons
too, but I'm not that qualified to list them. Here are some reasons:
<http://stackoverflow.com/questions/130475/why-is-lisp-used-for-ai>)

~~~
cbo
Partially historic, yes. It was faster/easier for students to write their AI
programs in Lisp than in any other language for a long time. Between
functional programming, a REPL, and macros, you could find yourself doing a
lot with a little.

Prolog is also partially historic, but it has the added benefit of being
logic-based, which is the direction that AI focused on for several years.
Around that time, it was believed that AI could be done with pure symbolic
logic, and that's exactly how programming in Prolog works. This approach
eventually turned out not to work very well, but Prolog is still used in some
places because it's a very easy language for interacting with graphs and
decision trees (which are big things in AI).

~~~
eel
Lisp and Prolog are still huge in AI, at least in academia, or at least at my
university. For instance: ICARUS [1], CCalc [2], and answer set programming
solvers [3], all of which are part of active and recent research, use Lisp or
Prolog.

Prolog was and still is used precisely because it is so (relatively) easy to
specify some facts and behaviors as Horn clauses [4], which is important,
because it is one of the few places I ever hear the phrase "solvable in
polynomial time" in KRR.

[1] <http://circas.asu.edu/cogsys/papers/manual.pdf>

[2] <http://www.cs.utexas.edu/users/tag/cc/>

[3] <http://potassco.sourceforge.net/>

[4] <http://en.wikipedia.org/wiki/Horn_clause>
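
The polynomial-time remark is easy to see in code: forward chaining over
propositional Horn clauses only ever adds facts, so it must stop after at most
one pass per derivable fact. A toy Python sketch (the rules are made up for
illustration):

```python
def forward_chain(facts, rules):
    """Derive everything entailed by Horn rules (body, head): repeatedly fire
    any rule whose body is fully known, until a pass adds nothing new."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if head not in known and all(p in known for p in body):
                known.add(head)
                changed = True
    return known

# Each rule is (body, head), read as: body1 AND body2 ... -> head.
rules = [({"rain"}, "wet"), ({"wet", "cold"}, "ice"), ({"ice"}, "slippery")]
derived = forward_chain({"rain", "cold"}, rules)
```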

------
Killah911
Any AI book list that includes "On Intelligence" as part of the reading list
is good with me...

------
gbrindisi
For your downloading needs: <http://gen.lib.rus.ec/>

~~~
dvse
I'm not sure it is wise to post links to the site on open forums.

