
Review of “Artificial Intelligence: A General Survey” (1993) - deepaksurti
http://www-formal.stanford.edu/jmc/reviews/lighthill/lighthill.html
======
schoen
Here's the paper that this is a review of:

[http://www.chilton-computing.org.uk/inf/literature/reports/lighthill_report/p001.htm](http://www.chilton-computing.org.uk/inf/literature/reports/lighthill_report/p001.htm)

------
bo1024
Wow:

> 1\. Much work in AI has the ``look ma, no hands'' disease. Someone programs
> a computer to do something no computer has done before and writes a paper
> pointing out that the computer did it. The paper is not directed to the
> identification and study of intellectual mechanisms and often contains no
> coherent account of how the program works at all.

(It seems this was written in about 1973, published online in 1993. I infer
this because the article being reviewed appeared in 1973, and this one speaks
of 1978 as being in the future).

~~~
blt
Also:

> _It would be a great relief to the rest of the workers in AI if the
> inventors of new general formalisms would express their hopes in a more
> guarded form than has sometimes been the case._

------
AndrewOMartin
This is a review of the 1973 Lighthill report, commissioned by the UK government to evaluate investment in AI research.

The report split AI research into three categories: "Advanced Automation" (category A), covering any work whose goal was to produce useful behaviour that would be considered intelligent if performed by a human; "Computer-based [Central Nervous System] research" (category C), covering work that aimed to further the understanding of the natural central nervous system by studying models of artificial neurons; and "Building Robots" (category B), placed conceptually between the other two. Category B contained all work aiming to create an artefact that exhibited category A behaviour through methods developed in category C, and was therefore also referred to as the "Bridge category".

The tone of the Lighthill report was more cautious than much of the related literature, often citing admissions of naivety and subsequent disappointment from distinguished researchers. Categories A and C were hailed as promising endeavours, provided that individual projects in category A remained within a tightly defined context, and that work in category C stayed closely aligned with merely computationally supporting research that would otherwise have been performed in psychology and neurobiology. The report was largely critical, though not unfair, of category B, which could not justify uniting categories A and C into a single research field by citing successful projects that drew heavily on both.

The central critique of category B was that no projects existed that were both (i) sufficiently related to the other two categories and (ii) successful. This critique can be read as an instance of the "No True Scotsman" informal fallacy: any successful project suggested as a counterexample could be claimed to be insufficiently grounded in category A or C, and any project that drew significantly on both categories could be claimed to be insufficiently successful.

This pattern can be seen in the report: projects that are inarguably successful are inevitably described as using "general computational theory", even though, given that the roots of computational theory lie in Turing's work on decidability and intelligence, it is not obviously distinct from category A research. Similarly, Winograd's SHRDLU project, which is acknowledged to draw deeply on both other categories, was claimed to be essentially not successful enough, since "suggestions for possible developments in other areas that can properly be inferred from the studies are rather discouraging". Lighthill also noted that "one swallow does not make a summer", though this was a "banal" observation by his own admission.

The Lighthill report had a significantly negative effect on AI research funding, though it was not entirely unfair. It was broadly compatible with Dreyfus's view, and it may have spared AI's reputation a more disastrous collapse that could have followed had overconfidence and overpromising been allowed to continue unchecked.

~~~
AndrewOMartin
Anyone interested in the history of AI can also watch the report being presented by Lighthill, and subsequently debated by John McCarthy, Donald Michie, and Richard Gregory.

[https://www.youtube.com/watch?v=03p2CADwGF8](https://www.youtube.com/watch?v=03p2CADwGF8)

------
hprotagonist
Totally not germane to the content, but it's amusing how I can _know_ that this was originally written in LaTeX by the quotation marks alone.

Inspecting page source confirms it.

~~~
schoen
The page source confirms that it was converted from LaTeX, but TeX itself was
only released in 1978 and a sibling comment to yours points out that this
article refers to 1978 as the future:

> Well, if programs can't do better than that by 1978, I shall lose a L250 bet
> [...]

(presumably £250)

This makes the original format of the document more of a mystery.

~~~
gattilorenz
There's a chance it was written in the "Stanford Artificial Intelligence text editor E (written by Dan Swinehart)", and later converted to TeX:
[http://www-formal.stanford.edu/jmc/oldnotes.html](http://www-formal.stanford.edu/jmc/oldnotes.html)

[https://en.wikipedia.org/wiki/E_(text_editor)](https://en.wikipedia.org/wiki/E_\(text_editor\))

