Computing machinery and intelligence (1950) (loebner.net)
75 points by drikerf on Aug 8, 2016 | 13 comments



Interesting passage:

“The fact that Babbage’s Analytical Engine was to be entirely mechanical will help us to rid ourselves of a superstition. Importance is often attached to the fact that modern digital computers are electrical, and that the nervous system also is electrical. Since Babbage’s machine was not electrical, and since all digital computers are in a sense equivalent, we see that this use of electricity cannot be of theoretical importance. Of course electricity usually comes in where fast signalling is concerned, so that it is not surprising that we find it in both these connections. In the nervous system chemical phenomena are at least as important as electrical. In certain computers the storage system is mainly acoustic. The feature of using electricity is thus seen to be only a very superficial similarity. If we wish to find such similarities we should look rather for mathematical analogies of function.”

It's easy to look at some technology (like machine learning today?) and think: this is how the brain works. But Turing reminds us: not so fast.


It looks to me more like Turing was saying that the essence of the brain does not come from its use of electricity. Nor is it valid to say that computers and brains are alike because they both operate with electricity; rather, their essential similarity is that they are both expressible in terms of what he called a Universal Machine.


>they are both expressible in terms of what he called a Universal Machine.

Prove it.


We are still waiting on a proof of the Church-Turing Thesis, so maybe that is next on the agenda.


> In certain computers the storage system is mainly acoustic.

I believe that's a reference to mercury delay lines, which preceded ferromagnetic core memory.
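
For anyone curious what "acoustic storage" means in practice, here's a minimal sketch (Python, with made-up parameters) of the recirculating idea behind a mercury delay line: bits travel through the medium for a fixed delay, emerge at the far end, and are fed back in, so the store is whatever happens to be in flight at any moment.

  from collections import deque

  class DelayLineMemory:
      """Toy model of a recirculating (e.g. mercury acoustic) delay line.

      A bit injected at one end re-emerges after `length` ticks and is fed
      back in, so the store holds `length` bits 'in flight' at once.
      """

      def __init__(self, length):
          self.line = deque([0] * length)

      def tick(self, write_bit=None):
          """Advance one clock tick; optionally overwrite the recirculated bit."""
          out = self.line.popleft()        # bit emerging from the far end
          self.line.append(out if write_bit is None else write_bit)
          return out

  # Store the pattern 1,0,1,1 in a 4-bit line, then read it back twice.
  mem = DelayLineMemory(4)
  for bit in [1, 0, 1, 1]:
      mem.tick(write_bit=bit)
  print([mem.tick() for _ in range(8)])    # -> [1, 0, 1, 1, 1, 0, 1, 1]

The real machines did this with ultrasonic pulses in a tube of mercury rather than a software queue, of course, but the "memory as bits in transit" picture is the same.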


Here are a few points of note for anyone who wants to seriously get to grips with this historically significant article.

1 - Notice that the first paragraph is dedicated to the rejection of the question "Can machines think?", later described as "too meaningless to deserve discussion". The proposed experiment is not presented as a way to answer that question, but as a "closely related" replacement.

2 - It's a matter of continuing debate whether Turing actually expected the experiment to be performed, or to remain a thought experiment. Some evidence for the former is that in a less popular paper, "Intelligent Machinery" (Turing, 1948), he describes an idealised form of an experiment he actually did perform, in which a person plays chess against an opponent who may be either a human player or a human following an algorithm, and attempts to determine the nature of his opponent. Evidence for the latter is that Turing explains the experiment isn't about existent machines, but about "imaginable computers which would do well".

3 - The precise experiment is not clear. When "a machine takes the part of [the hidden man]", is the interrogator told he's questioning a man and a woman, or a machine and a woman? Is it significant that the machine takes the male's place, and the place of the deceiver? Can questions be directed to one hidden player, or are they always seen by both? Note that to "pass" is not merely to pass as a human, but to be as good as a human at this game of bluff and deception. Note also that the woman's aim is to be correctly identified as such. Later in the paper Turing mentions "five minutes of questioning"; if that includes the time to type the questions to each individual and receive responses, it doesn't leave much time for proper interrogation.

4 - Turing notes that a machine might be doing something that "ought to be described as thinking" and still fail to pass the test, but that should a machine pass, we needn't concern ourselves with this possibility.


About #2 (or maybe really about #4), it's interesting how computers are better than people at chess, but still recognizably non-human.


If you're interested in how Turing developed the ideas in this paper, I can fully recommend the Turing biography by Andrew Hodges. I'm halfway through it now. It's an interesting read: apart from Turing's life, it tries to reconstruct the relationships he had with fellow researchers, the environment in which he was working, and the development of his thinking.


Hodges' biography of Turing is a monumental piece of work, covering Turing's private and family life, his time as a codebreaker, his later work in Manchester attempting to build the first real computer, and lesser-discussed work such as his voice-encrypting transatlantic telephone and his study of morphogenesis.

You not only get the story of these events; the work is detailed at such a level as to satisfy an academic. So if, for example, you've heard the Enigma code-breaking story but really wanted to hear the step-by-step process from transmission to plain text, this book's for you.

The current edition is branded like the film, but don't let that put you off.


I'll def check that out :). I'm just at the beginning of my first AI course and got this article recommended by the prof. Really enjoyed it.


Haha, this is arguably *the* article in AI.

Other landmark events in the field include the Dartmouth Conference, at which John McCarthy named the field "Artificial Intelligence", defined the goals, and set out to achieve them. Unfortunately it's not easy to point to a single paper on the subject, though do read the conference proposal, posted on HN recently [https://news.ycombinator.com/item?id=12080269]. Then you have the Lighthill report, with which the UK Government essentially lost interest in the project; its conclusions are debatable, but clearly and entertainingly argued, and a video of the presentation of the report, with McCarthy present and responding, is also available online [https://www.youtube.com/watch?v=yReDbeY7ZMU&list=PLhThm05V6b...].

Finally, I'd cite the 1970s-era book What Computers Can't Do, and its 90s reprint What Computers Still Can't Do, as all you need to know about the current state of AI, its ultimate aims, and the fatal flaws in its fundamental assumptions. The fact that it was written in the 70s and still applies to today's discussions of AI should be enough to indicate its prescience.

Modern AI ignores, or is unaware of, many of the critiques that have gone before; only time will tell whether it will soon hit the same historical obstacles.

The points I've mentioned here don't really cover the history and development of neural networks, but they went through a similar process of discovery, critique, near-dormant research, and finally a return to popularity, with the earlier critiques neglected rather than addressed.

Welcome to the field!


  We can demonstrate more forcibly that any such statement would be unjustified.
  For suppose we could be sure of finding such laws if they existed. Then given
  a discrete-state machine it should certainly be possible to discover by
  observation sufficient about it to predict its future behaviour, and this
  within a reasonable time, say a thousand years. But this does not seem to be
  the case. I have set up on the Manchester computer a small programme using
  only 1,000 units of storage, whereby the machine supplied with one sixteen-
  figure number replies with another within two seconds. I would defy anyone to
  learn from these replies sufficient about the programme to be able to predict
  any replies to untried values.
This is possibly one of the first examples of a pseudorandom function as we understand the term today. I would love to know what Turing's function was, and how breakable it would be with today's techniques.
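
We can only guess at what Turing actually ran. As a purely hypothetical modern analogue (not Turing's function, and assuming HMAC-SHA256 behaves as a pseudorandom function), here's a Python sketch of a keyed function that answers one sixteen-figure number with another, where the secret key plays the part of the hidden programme:

  import hmac, hashlib, secrets

  # Hypothetical stand-in for the hidden Manchester programme: a keyed PRF
  # built from HMAC-SHA256, truncated to a sixteen-figure reply.
  KEY = secrets.token_bytes(32)   # the secret the observer must not learn

  def reply(sixteen_figure_number: int) -> int:
      """Deterministically map a 16-digit query to a 16-digit reply."""
      msg = str(sixteen_figure_number).zfill(16).encode()
      digest = hmac.new(KEY, msg, hashlib.sha256).digest()
      return int.from_bytes(digest[:8], "big") % 10**16

  # An observer sees pairs like these, but without KEY (and assuming the PRF
  # holds up) cannot predict the reply to any untried value.
  for query in (1234567890123456, 1234567890123457):
      print(query, "->", reply(query))

Whatever Turing squeezed into 1,000 units of storage was of course far simpler than a modern MAC, which is exactly why it would be fun to see how it fares against today's cryptanalysis.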


I really enjoyed the 2014 film about Alan Turing, "The Imitation Game" [http://www.imdb.com/title/tt2084970/], named after this article: "The new form of the problem can be described in terms of a game which we call the 'imitation game.'"



