You're working with a fundamentally different medium; you should be playing to its strengths, not trying to emulate the strengths of other mediums.
Think about it: we don't just get very cheap labor working 24/7 without moral issues or personal needs. We can also have robots take over our jobs very efficiently, and technological progress becomes unstoppable (particularly if they can improve themselves). It's the essence of the singularity.
That's not unrealistic. Computers already have clock speeds many thousands of times faster than the human brain (our brains get away with it by being massively parallel), have instant access to the entire world's knowledge base, can be scaled up indefinitely (just add another processor), and most importantly, can modify their code at will. If they see an improvement they can make to their own intelligence, they instantly become more intelligent, and get better at finding even more improvements, or designing even better AIs.
There always seems to be an assumption in popular thought that any intelligent machine would necessarily be like us: i.e., have drives, motivations, a self-preservation instinct, etc. This just isn't the case. Even in us, our rational nature is an appendage to our more base drives and instincts. Intelligence does not come with these motivations and drives; they are completely separate. There is no reason to think that a general AI would have any of these things, thus concerns about it deciding that it's better off without us are completely unfounded.
There is a concern, however, that someone would program an AI specifically with these motivations. In that case we do have everything to worry about.
Do you know of any examples of intelligent beings that don't have any motivations and drives? (Note that "motivations and drives" is a very general term; I agree they don't have to be human motivations and drives, but that's not the same as saying the AI has none at all.)
> There is no reason to think that a general AI would have any of these things, thus concerns about it deciding that it's better off without us are completely unfounded.
If an entity doesn't have some kind of motivation and drive, how can it be intelligent? Intelligence doesn't just mean cogitating in a vacuum; it means taking information in from the world, and doing things that have effects in the world. (Even if the AI just answers questions put to it, its answers are still actions that have effects in the world.) So an AI has to at least have the motivation and drive to take in information and do things with it; otherwise it's useless anyway.
So given that the AI at least has to "want" to take in information and do things with it, how do you know the things it will want to do with the information are good things? ("Good" here means basically "beneficial to humans", since that's why we would want to build an AI in the first place.) We can say that we'll design the AI this way; but how do we know we can do that without making a mistake? A mistake doesn't have to be "oh, we programmed the AI to want to destroy the world; oops". A mistake is anything that causes a mismatch between what the AI is actually programmed to do, and what we really want it to do. Any programmer should know that this will always happen, in any program.
My computer, for certain definitions of intelligent.
> Intelligence doesn't just mean cogitating in a vacuum; it means taking information in from the world, and doing things that have effects in the world.
I agree with this; but its behavior does not have to be self-directed to be intelligent. Again, computers behave quite intelligently in certain constrained areas, yet their behavior is completely driven by a human operator. There is no reason a fully general AI must be self-directed based on what we would call drives.
>So an AI has to at least have the motivation and drive to take in information and do things with it
I don't see this as true either. Its (supposed) neural network could be modified externally without any self-direction whatsoever. An intelligent process does not have to look like a simulation of ourselves.
The word "being" perhaps is the stumbling point here. Perhaps it is true that something considered a "being" would necessarily require a certain level of self-direction. But even in that case I don't see it being possible for a being who was, say, programmed to enjoy absorbing knowledge to necessarily have any self-preservation instinct, or any drives whatsoever outside of knowledge-gathering. All the "ghosts in the machine" nonsense is pure science fiction. I don't think there is any programming error that could turn an intended knowledge-loving machine into a self-preserving amoral humanity killer. The architecture of the two would be vastly different.
Yes, but I would argue that those definitions are not really relevant to this discussion. You say...
> behavior does not have to be self-directed to be intelligent
...which is true, but the whole point of AI is to get to a point where computers are self-directed; where we don't have to laboriously tell the computer what to do; we just give it a general goal statement and it figures out how to accomplish it. If we have to continuously intervene to get it to do what we want, what's the point? We have that now. So this...
> Its (supposed) neural network could be modified externally without any self-direction whatsoever.
...is also not really relevant, because the whole point is to develop AIs that can modify their own neural networks (or whatever internal structures they end up having) as an ongoing process, the way humans do.
(Btw, one of the reasons I keep saying this is "the whole point" is that developing such AIs would confer a huge competitive advantage, compared to the "intelligent" machines we have now, which only exhibit "intelligent" behavior with continuous human intervention. So it's not realistic to limit discussion to the latter kind of machines; even if you personally don't want to take the next step, somebody else will.)
The word "being" perhaps is the stumbling point here.
No, I think it's the word "intelligent". See above.
> I don't think there is any programming error that could turn an intended knowledge-loving machine into a self-preserving amoral humanity killer.
I don't think you're trying hard enough to imagine what effects a programming error could have. Have you read any of Eliezer Yudkowsky's articles on the Friendly AI problem?
> The architecture of the two would be vastly different.
Why would this have to be the case? Human beings implement both behaviors quite handily on the same architecture.
>the whole point of AI is to get to a point where computers are self-directed; where we don't have to laboriously tell the computer what to do;
There is a lot in between laboriously telling it what to do and having a self-directed entity traipsing in and out of computer networks. A system that is smart enough to figure out how to accomplish a high-level goal on its own doesn't have to be a self-directed entity. It just needs to be infused with enough real-world knowledge that the supposed optimization problem has a solution.
>because the whole point is to develop AI's that can modify their own neural networks (or whatever internal structures they end up having) as an ongoing process, the way humans do.
I think you give us too much credit. We may guide our learning processes, but the actual modification of our neural networks is completely out of our control. The distinction here may seem useless, but in this case it is important. A supposedly non-self-directed but intelligent being would simply not have the ability to direct its learning processes. We would still have to bootstrap its learning algorithms on a particular dataset to increase its knowledge base. But this doesn't contradict general AI, nor would it be useless. In fact, I would say the only thing we would lose is the warm fuzzies that we created "life". It's still just as useful to us if we're in control of its growth.
But a system that can do this and is a self-directed entity provides, as I said, a competitive advantage over a system that can do this but isn't self-directed. So there will be an incentive for people to make the latter kind of system into the former kind.
> We may guide our learning processes, but the actual modification of our neural networks is completely out of our control.
As you state it, this is false, because guiding our learning processes is controlling at least some aspects of the modification of our neural networks. But I agree that our control over the actual modification of our neural networks is extremely coarse; most aspects of it are out of our control.
> It's still just as useful to us if we're in control of its growth.
No, it isn't, because if we're in control of its growth, its growth is limited by our mental capacities. An AI which can control its own growth is only limited by its own mental capacities, which could exceed ours. Since one of the biggest limitations on human progress is limited human mental capacity, an AI which can exceed that limit will be highly desirable. However, the price of that desirable thing is that, since by definition the AI's mental capacity exceeds that of humans, humans can no longer reliably exert control over it.
I don't know why you think this is true. In fact, we do this all the time. For just about any decent-sized neural network we train, we are incapable of comprehending how it functions. Yet we can bootstrap a process that results in the solution just the same. As long as we are able to formulate the problem of "enhance AI intelligence", it should still be able to solve such a problem, despite our lack of intellect to comprehend the solution.
That's because we can define what the solution looks like, in order to train the neural network. We don't understand exactly how, at the micro level, the neural network operates, but we understand its inputs and outputs and how those need to be related for the network to solve the problem.
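To make that concrete, here's a minimal sketch (plain NumPy, with a made-up toy task): we specify only the inputs, the desired outputs, and the error metric, and let gradient descent find internal weights we never need to comprehend.

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs we understand
    y = np.array([[0], [1], [1], [0]], dtype=float)              # outputs we want (XOR)

    W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)               # hidden layer (opaque to us)
    W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)               # output layer
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    for step in range(20000):
        h = np.tanh(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        # The mean-squared-error metric below is the only place where
        # "what counts as a solution" lives.
        d_out = 2 * (out - y) / len(X) * out * (1 - out)
        d_W2 = h.T @ d_out; d_b2 = d_out.sum(0)
        d_h = (d_out @ W2.T) * (1 - h ** 2)
        d_W1 = X.T @ d_h; d_b1 = d_h.sum(0)
        for p, g in ((W1, d_W1), (b1, d_b1), (W2, d_W2), (b2, d_b2)):
            p -= 0.5 * g                                         # plain gradient descent step

    print(out.round(2))  # approaches [[0],[1],[1],[0]]; the learned weights stay uninterpreted

Nothing in there requires understanding the network's internals; the relation between inputs and outputs is the entire problem statement.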
> As long as we are able to formulate the problem of "enhance AI intelligence", it should still be able to solve such a problem, despite our lack of intellect to comprehend the solution.
You've pulled a bait and switch here. Above, you said we didn't comprehend the internal workings of the network; now you're saying we don't comprehend the solution. Those are different things. If we can't comprehend the solution, we can't know how to train the neural network to achieve it.
If the problem is "enhance AI intelligence", then we will only be able to do it if we can comprehend the solution, enough to know how to train the neural network (or whatever mechanism we are using). At some point, we'll hit a limit, where we can't even define what "enhanced intelligence" means well enough to train a mechanism to achieve it.
That's addressing a strawman; I suspect you don't understand the argument. It would be narcissistic to worry about advanced AI meddling in human affairs, unless it were specifically programmed to do so.
But that's not what people are worried about. The concern is that we would be no more to a super-intelligent AI than ants are to us. The concern is that these slow, stupid fleshy bags of meat would be a nuisance in the way of an amoral AI.
Why would this be a concern unless the end result is an AI who decided to get rid of us? I think I understood the argument just fine.
Edit: I think I see where the misunderstanding is. For an AI to decide we were a nuisance it would necessarily need some kind of drive that we were getting in the way of. No drives means no decision regarding our fate.
And all AIs have goals. That's what an AI is: a utility-optimizing machine. Utility implies a goal, an end-game state of affairs with maximal expected utility.
For example, the AI force feeds everyone happy pills to maximize happiness. Or kills everyone to stop anyone from ever suffering again. Or maybe it values lots of beings and so forces us to reproduce as much as possible. All sorts of disturbing worlds are possible if the AI doesn't have exactly the same values we have. And we don't even know what our own values are.
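A toy sketch of that point (the candidate worlds and scores are obviously invented): the agent just takes the argmax over whatever utility function it was handed, so a carelessly specified utility ranks a monstrous world above the one we actually wanted.

    # Candidate end states the AI could steer the world toward (made-up scores).
    candidate_worlds = {
        "status quo":                      {"reported_happiness": 6.0, "autonomy": 1.0},
        "cure diseases, end poverty":      {"reported_happiness": 8.5, "autonomy": 1.0},
        "force-feed everyone happy pills": {"reported_happiness": 9.9, "autonomy": 0.0},
    }

    def naive_utility(world):
        # The programmer only remembered to reward reported happiness...
        return world["reported_happiness"]

    best = max(candidate_worlds, key=lambda name: naive_utility(candidate_worlds[name]))
    print(best)  # -> "force-feed everyone happy pills"

Nothing malicious is happening; the machine is doing exactly what it was told to optimize.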
Imagine never missing a proof you need, a bit of news that might instantly leverage your talents, the bottoming out of your short position, a piece of entertainment that you might not hear about for YEARS otherwise.
Now how much would you pay if that plane could flap its wings?
I actually rather enjoy discovering an album or a TV show that I enjoy, even if it's obscure or long-forgotten.
Likewise, if you're trying to beat captchas -- ie simulate human answers, and there seems to be no capturable regularity to the human answers (as suggested in the article) ... then the easiest approach may very well be to simulate human-specific (thinking) dynamics directly.
OTOH you're right that if your goal is not to predict some natural system (birds or humans) but simply to do some more general task very well (flying, or eg computer design), then yes, you're best off discarding nature's designs in favor of an approach that captures the more essential elements of the problem domain (aerodynamic lift or processor efficiency) -- and thereby end up with fixed rather than flapping wings, or binary logic rather than neural nets.
Also, both the participants being tested (AI and human) should be trying to convince the judge that they're the human and the other is the AI. Given that chatterbots that insert "one-liners" and other non sequiturs do well, I doubt that's happening, so again, the Loebner test is useless.
The Turing Test concept appeared when everybody thought intelligence was mostly words, and that we could probably easily make computers spit out meaningful sequences of words. Nobody believes anymore that intelligence is how well you can write and read (except fad-based media reports: "OMG SIRI IZ INTELLUGENT AI SKYNET!"). The Turing Test concept persists because it's simpler than dirt to explain, even though it has never been a valid method of evaluating anything.
In describing the setting of the test, Turing first describes an imitation game where a judge must tell apart a man and a woman. He writes "In order that tones of voice may not help the interrogator the answers should be written, or better still, typewritten. The ideal arrangement is to have a teleprinter communicating between the two rooms. Alternatively the question and answers can be repeated by an intermediary." So the text format comes out of the practicalities of defining a workable test, not because Turing thought 'intelligence is how well you can write and read'.
Your other objection is that there are humans who can't form written sentences. Well, they could use the intermediary, if they can't write.
But, even if you mean that they can't put a verbal sentence together, the test is framed such that a machine that can pass it can be considered to think, not so that any machine (or human) that can think is supposed to be able to pass the test.
Again, from Turing's paper: "May not machines carry out something which ought to be described as thinking but which is very different from what a man does? This objection is a very strong one, but at least we can say that if, nevertheless, a machine can be constructed to play the imitation game satisfactorily, we need not be troubled by this objection."
In other words, passing the test - succeeding at the imitation game - is designed to be a sufficient test to demonstrate intelligence, but not a necessary one.
Drifting culture impacts what counts as "passing" too. Way Back Then, everything was slightly more formal, proper, and precise. These days, I have SMS chats with some people who rarely reply with more than one or two words (or maybe an emoji if they're feeling really communicative at the moment).
The point is to prove that the computer is actually understanding what you are saying in normal English. It's not necessary for it to be able to figure out the different formats that words can be written in, in order to prove that it's intelligent.
Also, it's a terrible implementation of the Turing test on purpose. No one has ever come close to real AI that could pass a real Turing test. The point of the prize is more about who can come up with the most convincing chatbot. At least if I understand it correctly.
That's pretty short-sighted. These "easy" problems are stepping stones to much more difficult problems. For instance, you cannot tell a machine something like "build my website" without it being able to solve the easy stuff first.
Oh, but the result is not what you want? Even when communicating with a human, you need to provide some specs, right? But then, natural language is way too imprecise and open to interpretation, coupled with the fact that people don't really know for sure what they want, so even when speaking with a developer, you're going to go through several iterations. So for the ultimate flexibility and automation, the user interface of such a smart appliance will end up being essentially a programming language (which is the same reason lawyers and mathematicians have their own languages that are anything but natural).
Truth of the matter is, computers answering to commands aren't really that interesting, as they already do this. What we really want are computers capable of ideas.
Also, natural language processing is hard because it's based on an incredible amount of implicit context that life has taught us ever since we were born; when communicating with our fellow humans, it's never just about the actual words being spoken. When saying something like "build my website" your spouse probably knows you're talking about a website meant for marketing yourself, and she probably knows your tastes and values too; she has observed your fears and dreams while living your mundane life, even if you've never talked with her explicitly about such issues (speaking of women, they are incredibly good at reading between the lines).
Solving natural language processing is basically akin to creating sentient beings.
No; when a client asks me to build them a website, I don't demand a specification. I interrogate them in regular natural language about each thing, until I'm reasonably sure which things they care about being a certain way (I'll make those things that way to the best of my ability) and which things they don't care how they end up (I'll make those things however I "like" to make them, or randomly if I have no opinion either way).
I think the one thing we're lacking in software UX right now is the concept of a dialogue--you and the computer both asking questions to clarify your mental model of what the other agent currently has in mind, and adding facts to correct misconceptions until those models are aligned.
As far as I understand the history of AI research, people have also searched for their keys in the dark. They just haven't found them.
AI that can do things we can't easily do: that's for building Multivac.
AI that can do things we can easily do: that's for building Asimov-style robots.
Does anyone know the precise quote from Dijkstra?
Finally, to correct the possible impression that the inability to face radical novelty is confined to the industrial world, let me offer you an explanation of the continuing popularity of artificial intelligence. You would expect people to feel threatened by the “giant brains or machines that think”. In fact, the frightening computer becomes less frightening if it is used only to simulate a familiar noncomputer. I am sure that this explanation will remain controversial for quite some time for artificial intelligence, as a mimicking of the human mind, prefers to view itself as being at the front line, whereas my explanation relegates it to the rearguard. (The effort of using machines to mimic the human mind has always struck me as rather silly. I would rather use them to mimic something better.)
“The question of whether computers can think is like the question of whether submarines can swim”
I don't know about "best", but solving problems that a person can solve will increase the standard of living and create immense economic growth, because solving problems that only humans can solve means that more human work can be automated.
And then there is doing stuff that isn't easy for humans that requires at least the same level of intelligence. Like engineering things or writing computer programs.
Senses do not provide the data for intelligence. Their sole purpose is to provide syncing between reality and internal representation. Intelligence perceives and operates on this internal representation only.
Language is just another sense that helps with syncing the internal representation with some sort of reality (might be physical reality, might be the reality of social relationships, might even be the internal representation of another intelligent being).
I've seen logs of conversations between AI and human, and human and human. What struck me was how much more verbose the AI and the human talking to it were. Humans talking between themselves used such shorthand that the conversation was almost unintelligible to me, just because I'm not American or even a native English speaker.
What useful AI systems to date lack is the right internal representation. Current interaction between the real world and AI systems resembles interaction between application components and unit-testing mockups: the interface is sort of correct, but the mockups lack the actual meat inside.
AI research should IMHO go towards simulating reality, physical and social, in a flexible enough way, and finding ways of syncing this simulation to reality using narrow data channels such as language, heavily filtered (moving) images, and sounds.
Each time you remember something, it's being reconstructed, re-imagined by your brain. A method of treating people with stressful memories is based on that: you let them recall the things that bother them in a safe, positive environment, and after a few such recalls a large part of the stress associated with the memory disappears, because reading a memory is a sort of "destructive read": to keep the memory, the brain has to write it again after reading it, and what gets written back is not exactly what was read.
When you look at what's in front of you, you don't see the whole image even though you feel that way. You see only the thing you are focused on (inattentional blindness). The rest is filled in for you. You might argue that this comes directly from previous sensory input. I think you could design an experiment where you show a person a misleading image while using TMS to influence the brain to interpret it the wrong way, focus the person's attention on some part other than the misleading one, and then stop influencing the brain so that it could analyze the previously recorded sensory input properly. I'd bet that person would still see the rest of the image the way he saw it while his brain was being influenced, until you allowed him to focus attention on the misleading parts. Only then would he be able to readjust.
The perception of illusions also indicates the existence of an internal representation, to me. With the http://en.wikipedia.org/wiki/Spinning_Dancer illusion you can feel the moment when your brain switches between two completely different representations of an ambiguous image.
When you hear sounds, what you hear and remember is not just sensory input. What you hear depends on the context. Awesome demonstration of that effect: http://www.youtube.com/watch?v=8T_jwq9ph8k&feature=player_de...
When you hear the song backward for the first time you hear nothing. But when you see the text you were supposed to hear, you can actually hear those words. I think that just presenting you with the text and telling you to remember whether you heard this text the first time wouldn't change what you remember hearing. Try pausing when the text for the reversed version shows up and see if your memory of the reversed song playing matches any of that text.
I think this effect happens because senses are very thin, and sensory input is aggressively used to construct internal representation (with much of the representation created from experience) and then discarded.
Personally, when I imagine a horse, I don't imagine some abstraction of a horse. My subconscious mind pieces together chunks of images from my experiences with horse images and puts together something reasonably close. The stuff of mental computation to an extent is our memories of sensory inputs themselves, or abstractions over similar classes of inputs.
Thinking about it further, our ideas may not be as far apart as they seem.
Do you consider the "sensory input" as, say, the light waves hitting the retina, or the set of neural states triggered that induces a "qualia" experience of sight? In my explanation I was considering the qualia as the sensory input rather than the frequencies of light. Perhaps you're using the other definition?
I consider sensory input to be everything from the retina up to the point when you become aware that a horse just passed you.
I think that only this high-level information gets stored and is used for all intellectual activity. The actual sight, sound, and smell of a horse is stored only to the extent that allows you to recognize horses better in the future, but it's not part of any reasoning you might have later about why the horse was there, where it was going, and whether it would be cool to own a horse. You use an abstract representation of a horse for all those thoughts.
> Personally, when I imagine a horse, I don't imagine some abstraction of a horse. My subconscious mind pieces together chunks of images from my experiences with horse images and puts together something reasonably close.
You feel that, but if you tried to draw or sculpt a horse you'd see how many of the pieces you thought you recalled you actually made up, or have no idea how they really look. If I'm not mistaken, you admit that the horse you try to imagine gets rebuilt from bits and pieces that are stitched together. In my opinion, the foundation of that construct is the internal abstract representation of the horse concept.
> (i.e. the experience that a silhouette is likely a 3D object that is spinning in a particular direction)
In my opinion the brain doesn't switch between "spinning right" and "spinning left", but between "this person is slightly above me" and "this person is slightly below me". The change in the direction of rotation is just what tells you very clearly that your brain switched. Not only does the perception of a 3D object change, but the whole scene does: the relation between the observer and the object.
The way I imagine this works is that our sensory input fires some particular set of neurons which accounts for our sensory experience of the horse. When we recall a mental image of a particular horse, our brain attempts to recreate as best it can the neural firing pattern from the actual sensory input. Of course, this pattern gets distorted, as we do not remember specific images as a whole (unless one has a photographic memory), but pieces of images that represent certain abstractions over portions of a subject. These patterns are recreated by firing certain "bootstrap" neurons (memory units) that downstream cause the recreated pattern.
Expanding on this further, I can imagine our image-storage system being something like a many-dimensional quadtree, except instead of just spatial dimensions it also extracts colors, shapes, patterns, textures, etc. So different meaningful concepts are stored in different layers of the neural network, and some approximation to the original can be recreated on demand. This can certainly be considered an abstract representation, yet it is still tied to and semantically similar to a raw 2D mapping of the image. The difference is mainly storage efficiency due to compressing similar concepts learned from our experiences.
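For what it's worth, here's a rough sketch of the quadtree analogy in code (a toy compression scheme, not a claim about how the brain actually does it): recursively split an image, keep only the mean of regions that are similar enough, and reconstruct an approximation on demand.

    import numpy as np

    def build(img, threshold=10.0):
        """Return a nested structure: either a mean value (leaf) or four children."""
        if img.std() <= threshold or min(img.shape) <= 1:
            return float(img.mean())
        h, w = img.shape[0] // 2, img.shape[1] // 2
        return [build(img[:h, :w], threshold), build(img[:h, w:], threshold),
                build(img[h:, :w], threshold), build(img[h:, w:], threshold)]

    def reconstruct(node, shape):
        out = np.empty(shape)
        if not isinstance(node, list):
            out[:] = node                       # leaf: fill the region with its stored mean
            return out
        h, w = shape[0] // 2, shape[1] // 2
        out[:h, :w] = reconstruct(node[0], (h, w))
        out[:h, w:] = reconstruct(node[1], (h, shape[1] - w))
        out[h:, :w] = reconstruct(node[2], (shape[0] - h, w))
        out[h:, w:] = reconstruct(node[3], (shape[0] - h, shape[1] - w))
        return out

    img = np.add.outer(np.arange(64), np.arange(64)).astype(float)  # a toy gradient "image"
    tree = build(img)
    approx = reconstruct(tree, img.shape)
    print(np.abs(approx - img).mean())          # small, but the fine detail is gone

The reconstruction is close but made up below the stored level of detail, which is roughly the storage-efficiency point above.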
This is very true. But I think that recent AI experts (as opposed to those doing this work in the 70s) have realized that trying to tackle linguistic analysis is very very (very) hard. The problem with language (or more correctly, discourse) analysis is that even outside the realm of computing, it still hasn't been fully explicated.
A couple of months ago I took a graduate philosophy of language seminar (taught by the brilliant Sam Cumming at UCLA) in which we looked at various theories of discourse. It would be an understatement to say that these theories vary wildly. We have the classical RST (Rhetorical Structure Theory) by Mann and Thompson (renowned linguists at USC & UCSB), Jan Van Kuppevelt's erotetic model, Andrew Kehler's Theory of Grammar and a half-dozen or so more that I don't even remember.
So let's forget about computers for a second. We don't even know how humans process discourse. My term paper was about the parallel relation which is a very talked-about topic (almost as much as anaphora; see The New Yorker article) in the academic community; not only are such linguistic phenomena difficult to theoretically model, they are nigh impossible to practically implement (say, in some sort of AI schema).
So I'm not even surprised most AI folks just started doing work on SVMs or ANNs, or Markov chains, or what-have-you. It seems more practical to do work on stuff that could actually benefit from machine learning, as opposed to trying to solve incredibly difficult (and mostly theoretical) problems like discourse analysis.
The bottom line is that we're still a ways off from having computers like those in Star Trek - computers that understand anaphora, parallelism, ellipses, etc, etc.
I believe that it will not be necessary to cope with the corner cases of discourse for a machine to be regarded as intelligent, and I believe that machines that can cope with those corner cases (if ever realisable) may not be regarded as intelligent (I mean, doing this is not sufficient).
I think that in this sense discourse analysis is rather like computer chess - a goal that if attained may seem rather irrelevant to the grander challenge of creating machines that are capable of significant contributions to our culture.
I think that we risk attacking the problems that appear to us to be interesting.
>> It seems more practical to do work on stuff that could actually benefit from machine learning, as opposed to trying to solve incredibly difficult (and mostly theoretical) problems like discourse analysis.
The thing I am not sure about is how much it makes sense to attempt to create discourse analyses vs doing more "practical" work. The reason is that a complete and accurate discourse analysis seems nearly impossible. Having read through a few reference grammars and phonological analyses of various languages myself, it seems any rules linguists create are always riddled with copious exceptions and special cases (though certainly not all! There are examples of phonological rules that are very very regular across languages). Some of these analyses were extremely intricate.
I got the impression that many of them suffered from the mistake of trying to impose patterns on data that does not have any patterns.
So I don't fault the CS community for stepping away from linguistic puzzles and getting on with more useful stuff. After all, that's what engineering is. It looks like Levesque disagrees, but I'm not sure he's right.
Can an alligator run the hundred-metre hurdles?
http://www.animalquestions.org/reptiles/alligators/can-allig... (Many have thought it was a myth, but it is true that alligators are in fact able to climb fences.)
These anaphora questions are better solved through natural language parsing, for example by turning them into predicate logic. The article is right when it says that big-data algorithms have a problem with these sentences. But that is not to say they can't help. From the article:
The town councillors refused to give the angry demonstrators a permit because they feared violence. Who feared violence?
From the Wikipedia article on anaphora:
We gave the bananas to the monkeys because they were hungry.
We gave the bananas to the monkeys because they were ripe.
We gave the bananas to the monkeys because they were here.
Using semantic closeness can also work against you in the case of ambiguity:
The robber sits on the bank.
This is a decades-old hard problem that is close to philosophy. Is the robot inside Searle's Chinese room intelligent? Is it really understanding what it is saying? Or is it "faking" it?
Only for that particular example. It's easy to come up with examples where that heuristic fails:
"The town council denied the protestors a permit because they were in a bad mood."
"The factory failed to produce the car on time because it was undergoing maintenance."
It's not the case that every time an ambiguous sentence is used in real life people will stop and consider the possible cases. Granted, sometimes we will make the wrong assumption on how to disambiguate sentences that are absolutely ambiguous.
Another example where semantic closeness won't really help you (though context/situation would help):
"I looked at the man with the telescope."
Case 1: I looked at the man using the telescope.
Case 2: I looked at the man who was holding the telescope.
The difference between
We gave the bananas to the monkeys because they were hungry.
and
We gave the bananas to the monkeys because they were ripe.
is that only the monkeys can be hungry, and only the bananas can be ripe.
Similarly, out of all the meanings of 'bank', there's only one that can be sat on.
Consider me naive, but it shouldn't be too hard to build a program backed by nothing more than an SQL database that can handle these.
And consider me SUPER naive, but that's exactly what I am doing at the moment in my spare time.
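In that naive spirit, here's roughly what such a program could look like (Python with sqlite3; the table layout and facts are invented for illustration): store which predicates can sensibly apply to which nouns, and resolve "they" by lookup.

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE facts (noun TEXT, predicate TEXT)")
    db.executemany("INSERT INTO facts VALUES (?, ?)", [
        ("monkey", "hungry"), ("monkey", "here"),
        ("banana", "ripe"),   ("banana", "here"),
    ])

    def resolve(candidates, predicate):
        """Return the candidate nouns the knowledge base says the predicate can apply to."""
        return [n for n in candidates
                if db.execute("SELECT 1 FROM facts WHERE noun=? AND predicate=?",
                              (n, predicate)).fetchone()]

    print(resolve(["banana", "monkey"], "hungry"))  # ['monkey']
    print(resolve(["banana", "monkey"], "ripe"))    # ['banana']
    print(resolve(["banana", "monkey"], "here"))    # ['banana', 'monkey'] -- still ambiguous

It handles the banana/monkey sentences, and just as importantly it reports when the knowledge base leaves the sentence genuinely ambiguous ("they were here").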
The idea that pops into my mind is that whenever we have speech output from computers to humans, the computer should structure the sentences to minimize ambiguity (while remaining efficient). It would be unfortunate indeed if a human misunderstood a computer's instructions or warnings due to ambiguity on the part of the machine.
I gather Cyc has long been considered a failure by many in the AI community, and perhaps even an embarrassment by some. Still it is hard not to read Levesque's paper as a vigorous endorsement of the goals of the Cyc project, if perhaps not of its exact methods -- see sec. 4.4. I think it is incumbent on Levesque to explain what Cyc has done wrong, if he has a good theory about that -- and if he doesn't, he needs to admit it, because Cyc's ostensible failure makes it seem unlikely that any symbolic, reasoning-oriented approach to AGI could ever succeed.
Had Lenat published working versions or even complete explanations of his previous software (AM, EURISKO) I might be a little more sympathetic. Cyc, the project that never ends, appears to be in a state of perpetual partial implementation, its most notable product being a constant stream of money into the project.
But look on the bright side: perhaps I'm completely wrong, Cyc has actually achieved AI, and the NSA is using it right now!
I take it then, seiji, that you also completely disagree with Levesque's program as expressed in section 4.4 of his paper?
Discrete codified knowledge is not the stuff the universe is made of. The problem Cyc will never solve is the "a picture is worth a thousand words" problem. Describing everything in a relational, hierarchical, predicate calculus arrogantly ignores the unrelated multidimensionality of, well, everything.
People in academia can get ResearchCyc and see just what its capabilities are these days; it's basically the same as full Cyc. OpenCyc is a shadow of the real thing, supposedly.
Part 1 - Existence proofs
We think we can build 'intelligent' machines because human intelligence exists. We don't have a perfect idea of what it means to be intelligent, but we know that whatever we have exists, so it's not impossible to create. Some disagree with this stance on philosophical or religious grounds, and others resort to pointing out that even if it's possible, it's really really hard. Fair enough.
Part 2 - Methods
Usually we start by picking the thing we think best typifies intelligence and we work to solve that. Chess, Written Language Comprehension, Visual Pattern Recognition, Spoken Language, etc. These are all called AI. All well and good, but so far we get exceptional single purpose systems (the narrower the domain the better) and then look around and say 'but that's not really intelligent.'
Lately people have started to think more about the process by which things become intelligent rather than just the end behavior. Somehow the machine should naturally transition to being intelligent as opposed to being explicitly programmed with intelligence. Machine learning is so-hot-right-now because of this. But, again, in the end you get a well-trained system that does whatever it used as its error metric. Like differentiate cats and sailboats. "But that's not really intelligent because it can't [play mozart/read poetry/paint a picture]."
There are infinite criticisms, but they can be summed to a lack of 'generalness.'
Part 3 - Representation
What people searching for 'intelligence' are looking for is a system that can process data from at least as many sources in at least as many contexts as a human. The hard part there, and the one thing the brain does really really well, is being able to relate sight to sound, touch to taste, past to present, and present to future. In us there is a shared language of representation that encodes experience.
In AI so far it's an unusual system that tries to relate many senses, keep a lifelong memory, work in a noisy and incomplete environment, and constantly make predictions about what will happen next.
Part 4 - Data
It is an unusual AI researcher who has all the data they want. As computer people we are impatient, and so waiting 30 years for a robot to collect the 1.4 PB of visual information a human does by that age, or the 1.8 TB of audio information just isn't done. We use existing datasets that are computationally tractable (meaning you can run them in minutes, hours, or days).
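As a sanity check on those figures (back-calculating the implied data rates myself, so treat the rates as my own assumption, not a quoted fact):

    # What average data rates do "1.4 PB of visual input" and "1.8 TB of audio"
    # over ~30 years imply?
    seconds = 30 * 365.25 * 24 * 3600          # ~9.5e8 s
    visual = 1.4e15 / seconds                  # ~1.5 MB/s, roughly a 12 Mbit/s video stream
    audio  = 1.8e12 / seconds                  # ~1.9 kB/s, roughly a 15 kbit/s audio stream
    print(f"visual: {visual/1e6:.1f} MB/s, audio: {audio/1e3:.1f} kB/s")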
And yet we do not have an existence proof that intelligence of the general human-like kind can exist without years of exposure to the world. It's reasonable to expect that it's possible; we just don't have pre-existing knowledge of that fact.
Part 5 - The Future
So how will we get there from here? We're probably going to have to do it the hard way. Create something that can sense the world in the ways that we can comprehend, and painstakingly rear it, collecting data, automating and hard-coding what we can, until we have a set of error metrics, motivations, data, and environment in which we begin to see the thousand different skills called 'intelligence' that humans take for granted. In short, we'll probably end up thinking a bit more like parents, and a bit less like computer scientists.
What you express in the quote above is a very "computationalist" view of AI. FWIW, in contrast, I'm more of a "connectionist". This means that when I look at the same incredible "general intelligence" of humans -- this almost "synesthetic" ability to abstract and connect ideas -- I do not see a powerful translation and symbolic reasoning machine with some seemingly magical universal language buried deep within. In fact, I believe there can be no such language without severely compromising the reasoning system's generality.
Instead, I see a powerful "connection machine", where concepts, thoughts, language, etc. are all the result of some incredibly versatile connective learning/creativity process. Of particular importance, I see this connection machine thrive and prosper within systems where there are no axioms, no ground truth, no single agreed-upon notion of what separates "this" from "that" (you'll see this in humans when studying philosophy). The analogy to this notion from a computationalist perspective would be a machine whose instruction set, or core reasoning language, is always in a state of flux.
I will freely admit though that one view does not necessarily have any more explanatory power than the other; they both rely on some "magic" unknown assumptions. For a connectionist, the "magic" is the connective learning algorithm. For a computationalist, the "magic" is in this universal language/symbolic system.
Disclaimer: I'm by no means an AI expert (still have a lot more to learn!), but I always enjoy thinking/discussing these topics and the surrounding philosophy.
How do your connectionist interconnected networks of simple units actually give rise to general AI? Answer that and you'll have the "shared language of representation" that the OP was talking about.
It's also important to realize that these aren't beliefs, or truth claims, or scientific claims. They're philosophical perspectives; no more, no less. They might guide the intuition, but have no bearing on the science itself. Someone who doesn't clearly understand the distinction between the philosophy and science of a topic may definitely risk either contributing to the "depressingly common fallacy in science" you mention, or risk misinterpreting a philosophical argument for a scientific one, and hence through blurred vision believe they see fallacy when in fact there is none.
One way of looking at it is that both views are different philosophical angles on the same thing (or at least the same problem/mystery). A connectionist sees this conception of a "shared language of representation" as assigning a sacred mystery (what does this language actually consist of, precisely?) to an as-yet unexplained phenomenon, in the same way a computationalist sees this of the connectionist's learning algorithm (how does this learning algorithm work, precisely?)
The reason I highlight this philosophical symmetry is to emphasize that these are merely different intuitive mindsets developed towards approaching the common mystery of general intelligence.
The bottom line is so long as human-like "general intelligence" is a mystery (to the extent that we can't replicate it 100%+ effectively in computers), it's going to be an "unexplained phenomenon", and thus any theories developed around it will have some "magic" hole somewhere -- some key element devoid of predictive power. (Because if there were no such hole, then by definition, we'd already have it all figured out.)
2. While every single human must go through the 10-20 year learning process, there is no reason to duplicate that process for AI. The connection diagram and weight matrix can easily be copied between ANNs, so that they can constantly build on top of each other's knowledge.
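A tiny sketch of what that copying amounts to (layer names and shapes invented): a trained network's knowledge is just arrays of numbers, so cloning it is one deep copy rather than another 10-20 year childhood.

    import numpy as np

    # Stand-in for a network someone already spent the effort training.
    trained_net = {
        "layer1/weights": np.random.default_rng(0).normal(size=(784, 128)),
        "layer1/bias":    np.zeros(128),
        "layer2/weights": np.random.default_rng(1).normal(size=(128, 10)),
        "layer2/bias":    np.zeros(10),
    }

    # "Educating" a second agent is one deep copy of the weight matrices...
    clone = {name: w.copy() for name, w in trained_net.items()}

    # ...after which the clone can keep learning on top of everything the original knew.
    assert all(np.array_equal(trained_net[k], clone[k]) for k in trained_net)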
General intelligence seems to be just about every technique people can think of, thrown into a giant feedback loop. Unsupervised learning supervising other supervised learning. Classifiers classifying different possible classifications. Modeling by projecting known models. And, it all happens without you. "I" stand on the shoulders of a giant, just slightly above sea level, with all the inner workings hidden below.
See also the "artificial toddler":
First you have to come up with a way to automatically generate these questions though. If we have to make them manually, the machines can just remember all the answers.
Creating the questions seems much harder than answering them.
If you ask a few questions with multiple possible answers it's pretty unlikely it would get them all correct through random guesses anyways.
And a very large styrofoam ball can crash through a comparatively small table made of wood; it all depends on the relative definition of 'large', right?
I think the problem is that everybody can pick his favorite feature and define it as the key of 'general intelligence'. Some say anaphora, some say machine learning; I like Hofstadter: he says that it is all about analogy. http://www.amazon.com/Surfaces-Essences-Analogy-Fuel-Thinkin...
Also: the problem is that these statements can't be proven; it is all about opinions and dogmas. I think the argument is a question of power: the one who wins the argument has power over huge DARPA funds, or whoever else gives out grants for this type of research. The rush after definable problems (expert systems, big data) in AI might have something to do with the funding problem.
The 'Society of Mind' argument says that there are many agents that together somehow miraculously create intelligence. http://en.wikipedia.org/wiki/Society_of_Mind
This argument sounds good, but it makes it hard to search for general patterns/universal explanations of intelligence.
On the one hand they have to focus on some real solvable problem, on the other hand that makes it very hard to ask and find answers to general questions; I don't know if there will be some solution to this dilemma.
Maybe the problem needs an idle class of Brahmins who can ask questions and ponder them without end, without having to worry about questions of funding?
The example I recall they were able to answer correctly was 'Who's the president of the USA?' And then 'How tall is he?'
"Think of a giant flightless bird. Now, name a color that starts with the same first letter."
So, then, maybe one approach to AI is via natural language -- program a computer to understand natural language; just typed input should be sufficient initially.

How to do that? My two kitty cats understand some natural language, and I have to suspect that I roughly understand how they did that and how I could program some of it.

Human babies learn natural language, and we have to suspect that the effort is a bootstrap where they learn some really simple things -- e.g., "Ma Ma" and "Da Da" -- and then build on those. "Nice". "Bad". "Ma Ma nice." "Da Da bad". "Food". "Hungry". "Hungry want food."

"If hungry and want food, go to master, reach up with front paw and use claws to pull on shirt but don't pull on skin." -- my kitty cats both already figured this out, either independently or learned from each other.

"I can see." "Can he see me?" Kitty cats know that very well, and if they want to scratch or bite (one cat long ago, just rescued), they know to wait until the target can't see the claw or mouth about to bite. So, to come in from the back porch, they wait until there is noise indicating that I'm at the kitchen sink and take a position on the porch where they can be seen -- then I will let them back in.

Then build on such simple things. That's what I thought long ago. Once I asked DARPA about it, and they had no response. The author of the OP has another article on how birds and babies learn to understand language. So, maybe more than one person is thinking along those lines. Doing it first with just text input should show the core problems and be sufficient.

One problem: kitty cats have great internal 3-D geometry. E.g., if the mouse runs clockwise around a packing box, then the cat can be smart enough to run counterclockwise. So, the cat understands the 3-D box and paths in space. They're not stupid, you know! How to program that? Hmm ...!
Or another initial application of artificial intelligence, I think, is in trading financial markets: distilling every point of data to create models for predicting markets and making obscene amounts of money.
The Turing test seems a red herring, since afaik it's not a big part of research evaluation currently.
2. Even if one accepted that NLP is AI-complete: this is a sufficient condition and not a necessary condition. But the claim being made in the articles is that it is a necessary condition: that is, that other AI disciplines are distractions because they do not lead to the AI grail that NLP supposedly leads to. This is classic GOFAI hogwash.
But with the same amount of resources, human engineers are way better at designing things than evolution. If you give humans enough time, we can figure out how to make intelligence, and we can probably do a way better job than nature. At worst, we simply need to reverse-engineer how nature did it; at best, we find an even better way.
I'm curious -- what's your background? Also: try using paragraphs.