The Lighthill Debate on AI from 1973: An Introduction and Transcript (github.com/dicklesworthstone)
64 points by eigenvalue 3 months ago | 27 comments



Interesting excerpt on the origin of the term, "artificial intelligence":

    Professor Sir James Lighthill: [...] Now, what are the arguments for not calling this computer science, as I did in my talk and in my report, and calling it artificial intelligence? It's because one wants to make some sort of analogy. One wants to bring in what one can gain by a study of how the brains of living creatures operate. This is the only possible reason for calling it artificial intelligence instead.

    Professor John McCarthy: Let's see. Excuse me. I invented the term artificial intelligence. I invented it because we had to do something when we were trying to get money for a summer study in 1956, and I had a previous bad experience. The previous bad experience occurred in 1952, when Claude Shannon and I decided to collect a batch of studies, which we hoped would contribute to launching this field. And Shannon thought that artificial intelligence was too flashy a term and might attract unfavorable notice, and so we agreed to call it automata studies. I was terribly disappointed when the papers we received were about automata, and very few of them had anything to do with the goal that at least I was interested in. I decided not to fly any false flags anymore, but to say that this is a study aimed at the long-term goal of achieving human-level intelligence. Since that time, many people have quarreled with the term, but have ended up using it. Newell and Simon, the group at Carnegie Mellon University, tried to use complex information processing, which is certainly a very neutral term, but the trouble was that it didn't identify their field, because everyone would say, well, my information is complex. I don't see what's special about you.


I think that David Marr really nailed things down though.

https://dspace.mit.edu/bitstream/handle/1721.1/5776/AIM-355....


This isn't special; the same thing happened to Richard Bellman, the inventor of dynamic programming (and large swathes of control theory and reinforcement learning).


Thank you, that’s gold


When dealing with humans, marketing is everything.


I had seen the video of this debate years ago, but decided to revisit it recently in light of all the new developments in the field. I thought others might enjoy it too, especially people who had never heard of it before, but that many would prefer to read it instead of watching. So I created a full transcript with proper formatting.

I also included some thoughts in the intro and a section at the end that attempts to review the accuracy of the different speakers' arguments and claims since the debate took place 50 years ago. Hopefully it can spark an interesting debate here on the current state of the field and we can learn some lessons from the past!


Right on the money, I'd never heard of it before and I prefer the transcript to watching the video. Thanks a lot!


From the perspective of the 1970s, many of these problems would have appeared insanely hard to solve, to the point of impossibility.

Consider the idea of building insane torch rockets like those in The Expanse or Avatar. We’d need something like small compact fusion reactors or antimatter manufacturing at scale, not to mention enormous advances in materials and superconductors and such.

That looks impossible today and we know the shape of the problem. In 1973 the gap between computers of the time and those of today was similar to the gap between a chemical rocket and a relativistic antimatter blowtorch, but on top of that nobody really knew what approaches to AI might even bear fruit. We had way more unknown unknowns between us and HAL 9000 than we have between us and a starship.

It took many doublings of compute power, the accumulation of petabytes of training data, and thousands and thousands of researchers not just exploring the math but also tinkering (“graduate student descent” as it’s known in machine learning).
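
(For a rough sense of scale, a back-of-the-envelope sketch; it assumes the classic Moore's Law cadence of one doubling every ~2 years, which is only an approximation of real hardware history:)

    # Rough compute growth from 1973 to 2023, assuming one
    # doubling every 2 years (Moore's Law approximation).
    years = 2023 - 1973        # 50 years
    doublings = years / 2      # ~25 doublings
    factor = 2 ** doublings    # ~33.5 million times more compute
    print(f"~{doublings:.0f} doublings, ~{factor:,.0f}x the compute")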

It was definitely forgivable, in 1973, to think this might not be achievable.


I think in scale only. We are still using the same computer architectures, operating systems, and base networking protocols available in 1973. The scale of clock speeds and storage amounts would be astonishing to someone back then. I also believe that if you told them that you'd go to a terminal and type "ls" to list the contents of a directory, they would be astonished that that hadn't changed in 50 years.


You are totally forgetting about the most important part: GPUs. They didn't exist back in 1973.



I have an "encyclopedia of cybernetics" for kids, a book from the 1960s-1970s. Among other things it has two chapters on artificial intelligence (mentioning neural networks), and another on Moore's Law, trying to extrapolate computing power up to, IIRC, our time. Their predictions/speculations turned out to be pretty accurate, except for the photonic and DNA-based computers...


Sounds like an awesome book! I wish more kids' books were that sophisticated.


James Lighthill is sometimes criticized for commenting on a field in which he was not expert, so it adds context to look at why he was solicited for comment. At that time, fluid dynamics was one of the most prestigious technical fields in British academia, being very difficult, mathematical, and successful. Lighthill was the superstar of the field, so he was an obvious choice when the Science Research Council went looking for someone with status and technical chops to give a disinterested analysis of AI research. It is rather like how Richard Feynman was drafted to investigate the Challenger disaster.

TFA says "...Lighthill was no fool. And yet, he was very confidently and persuasively wrong about the potential for AI..." I disagree; his report (which is short and readable) raises issues that are still pertinent. We currently have debate over whether GPT represents true intelligence, and Lighthill's comments foreshadow those of current skeptics.

The author of TFA is clearly not a skeptic and accepts the views of John McCarthy. I have studied the debate between McCarthy and Hubert Dreyfus over the potential of AI, and I personally think that Dreyfus was right, current hype over ChatGPT notwithstanding.


Related:

Review of “Artificial Intelligence: A General Survey” (1993) - https://news.ycombinator.com/item?id=21700906 - Dec 2019 (9 comments)

John McCarthy (and others) vs. Lighthill on AI in 1973 - https://news.ycombinator.com/item?id=856843 - Oct 2009 (1 comment)


I mean, he was right … for what we knew at that time. He predicted correctly that the only way to achieve general intelligence would be to mimic the extremely complex neural networks in our brain, something the hardware of the time was very far from achieving. He could not predict that things would move so fast on the hardware side (nobody could have) to make this somewhat possible. I would argue we are still a bit short of having the computing power to make this a reality, but it is now much more obvious that it is possible if we continue on this path.


> He predicted correctly that the only way to achieve general intelligence would be to mimic the extremely complex neural networks in our brain

Besides the name, neural networks and human brains don't have that much in common.


Most of the relevant similarities are there. Every plausible model in computational neuroscience is based on neural nets or a close approximation thereof; everything else is either magic or a complete non-starter.
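
(For instance, here's a minimal sketch of a leaky integrate-and-fire neuron, one of the standard model neurons in computational neuroscience; the parameters are illustrative, not biologically fitted:)

    # Leaky integrate-and-fire neuron: membrane potential integrates
    # input current, leaks back toward rest, and spikes at a threshold.
    v, v_rest, v_thresh, v_reset = -65.0, -65.0, -50.0, -70.0
    tau, dt = 10.0, 1.0  # membrane time constant and time step, in ms
    for t in range(100):
        current = 20.0 if 20 <= t < 80 else 0.0  # injected current
        v += dt / tau * (-(v - v_rest) + current)
        if v >= v_thresh:
            print(f"spike at t={t} ms")
            v = v_reset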


> nobody could have

Hans Moravec at McCarthy's lab in roughly this timeframe (the 70s) wrote about this then -- you can find the seed of his 80s/90s books in text files in the SAIL archive https://saildart.org/HPM (I'm not going to look for them again). Easier to find: https://web.archive.org/web/20060615031852/http://transhuman...

(Same McCarthy as in this debate.)

Gordon Moore made up Moore's Law in 1965 and reaffirmed it in 1975.


ok - except detailed webs of statistical probabilities only emit things that "look right" ... not at all the idea of General Artificial Intelligence.
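
(As a toy illustration of what "webs of statistical probabilities" means here; the probabilities below are made up, and real models sample from vocabularies of tens of thousands of tokens:)

    import random

    # A single LLM-style next-token step: sample from a probability
    # table. The selection criterion is plausibility, not truth.
    next_token_probs = {"Paris": 0.90, "Lyon": 0.07, "Berlin": 0.03}
    tokens, weights = zip(*next_token_probs.items())
    print(random.choices(tokens, weights=weights)[0])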

secondly, people selling things and people banding together behind one-way mirrors have a lot of incentive to devolve into smoke-and-mirrors.

Predicting is a kind of social grandstanding, as well as insight. Lots of ordinary research has insight without grandstanding... so this is a media item as much as it is a real investigation, IMHO.


To be honest, restricting funding for the kind of symbolic AI research that is criticized in this discussion might have helped AI more than it hurt, by eventually pivoting the research toward neural networks and backpropagation. I don't know how much of a good thing it would have been if this kind of research had continued to be fully funded.


Symbolic AI is still alive and kicking (https://arxiv.org/abs/2402.00854), and I'm also liking the experiments around Graph Neural Networks and hybrids thereof.


>except detailed webs of statistical probabilities only emit things that "look right" ... not at all the idea of General Artificial Intelligence.

I mean, this is what evolution does too. The variants that 'looked right' but were not fit to survive got weeded out. The variants that were wrong but didn't negatively affect fitness to the point of non-reproduction stayed around. Looking right and being right are not significantly different in this case.


yes, you have made the point that I argue against above. I claim that "looking right" and "being right" are absolutely and fundamentally different at the core. At the same time, I acknowledge that from a tool-use, utilitarian, automation point of view, or a sales point of view, results that "look right" can be applied for real value in the real world.

Many corollaries exist. My claim is that "looking right" is not at all General Artificial Intelligence, yes.


"Being right" seems to be an arbitrary and impossibly high bar. Human at their very best are only "looks right" creatures. I don't think that the goal of AGI is god-like intelligence.


Humans "at their very best" are at least trying to be right. Language models don't - they are not concerned with any notion of objective truth, or even with "looking right" in order to gain social status like some human bullshitter - they are simply babbling.

That this strategy is apparently enough to convince a large number of (supposedly) intelligent people otherwise is very troubling!

Not saying that General AI is impossible, or that LLMs couldn't be a useful component in their architecture. But what we have right now is just a speech center; what's missing is the rest of the brain.

Also, simply replicating / approximating something produced by natural evolution seems to me like the wrong approach, for both practical and ethical reasons: if we get something with >= human-like intelligence, it would be a black box whose inner workings we could never understand, and it might be a sentient being capable of suffering.


What makes it "much more obvious that it is possible" to simulate the human brain? If you're thinking of artificial neural nets, those clearly have nothing to do with human intelligence, which was very obviously not learned by training on millions of examples of human intelligence; that would have been a complete non-starter. But that's all that artificial neural nets can do: learn from examples of the outputs of human intelligence.

It is just as clear that human brains have one more ability beyond the ability to learn from observations: the ability to reason from what is already known, without training on any more observations. That is how we can deal with novel situations that we have never experienced before. Without this ability, a system is forever doomed to be trapped in the proximal consequences of what it has observed.

And it is just as clear that neural nets are completely incapable of doing anything remotely like reasoning, much as the people in the neural nets community keep trying, and trying. The branch of AI that Lighthill almost dealt a lethal blow to (his idiotic report brought about the first AI winter), the branch inaugurated and championed by McCarthy, Michie, Simon and Newell, Shannon, and others, is thankfully still going and still studying the subject of reasoning, and making plenty of progress while flying under the hype.




