Artificial Intelligence and What Computers Still Don't Understand (newyorker.com)
185 points by phreeza on Aug 17, 2013 | 128 comments



I'm coming around to the idea that getting computers to act like people is like building planes that flap their wings because that's what birds do.

You're working with a fundamentally different medium; you should be playing to its strengths, not trying to emulate the strengths of other media.


But the reward is so great that we can't dismiss the possibility of finding the special wings that are appropriate for that priceless plane.

Think about it: we don't just get very cheap labor working 24/7 without moral issues or personal needs. We can also have robots take over our jobs very efficiently, and technological progress will be unstoppable (particularly if they can improve themselves). It's the essence of the singularity.


It's also really, really dangerous. Imagine an amoral being that is hundreds or thousands of times more intelligent than humans. Do you think you could control it?

That's not unrealistic. Computers already have clock cycles many thousands of times faster than the human brain (our brains get away with it by being massively parallel), have instant access to the entire world's knowledge base, can be scaled up indefinitely (just add another processor), and most importantly, can modify their code at will. If they see an improvement they can make to their own intelligence, they instantly become more intelligent, and get better at finding even more improvements, or designing even better AIs.


>Do you think you could control it?

There always seems to be an assumption in popular thought that any intelligent machine would necessarily be like us: i.e., have drives, motivations, a self-preservation instinct, etc. This just isn't the case. Even in us, our rational nature is an appendage to our more base drives and instincts. Intelligence does not come with these motivations and drives; they are completely separate. There is no reason to think that a general AI would have any of these things, thus concerns about it deciding that it's better off without us are completely unfounded.

There is a concern however, that someone would program an AI specifically with these motivations. In that case we do have everything to worry about.


Intelligence does not come with these motivations and drives, they are completely separate.

Do you know of any examples of intelligent beings that don't have any motivations and drives? (Note that "motivations and drives" is a very general term; I agree they don't have to be human motivations and drives, but that's not the same as saying the AI has none at all.)

There is no reason to think that a general AI would have any of these things, thus concerns about it deciding that it's better off without us are completely unfounded.

If an entity doesn't have some kind of motivation and drive, how can it be intelligent? Intelligence doesn't just mean cogitating in a vacuum; it means taking information in from the world, and doing things that have effects in the world. (Even if the AI just answers questions put to it, its answers are still actions that have effects in the world.) So an AI has to at least have the motivation and drive to take in information and do things with it; otherwise it's useless anyway.

So given that the AI at least has to "want" to take in information and do things with it, how do you know the things it will want to do with the information are good things? ("Good" here means basically "beneficial to humans", since that's why we would want to build an AI in the first place.) We can say that we'll design the AI this way; but how do we know we can do that without making a mistake? A mistake doesn't have to be "oh, we programmed the AI to want to destroy the world; oops". A mistake is anything that causes a mismatch between what the AI is actually programmed to do, and what we really want it to do. Any programmer should know that this will always happen, in any program.


>Do you know of any examples of intelligent beings that don't have any motivations and drives?

My computer, for certain definitions of intelligent.

> Intelligence doesn't just mean cogitating in a vacuum; it means taking information in from the world, and doing things that have effects in the world.

I agree with this; but behavior does not have to be self-directed to be intelligent. Again, computers behave quite intelligently in certain constrained areas, yet their behavior is completely driven by a human operator. There is no reason a fully general AI must be self-directed based on what we would call drives.

>So an AI has to at least have the motivation and drive to take in information and do things with it

I don't see this as true either. Its (supposed) neural network could be modified externally, without any self-direction whatsoever. An intelligent process does not have to look like a simulation of ourselves.

The word "being" perhaps is the stumbling point here. Perhaps it is true that something considered a "being" would necessarily require a certain level of self-direction. But even in that case I don't see it being possible for a being who was, say, programmed to enjoy absorbing knowledge to necessarily have any self-preservation instinct, or any drives whatsoever outside of knowledge-gathering. All the "ghosts in the machine" nonsense is pure science fiction. I don't think there is any programming error that could turn an intended knowledge-loving machine into a self-preserving amoral humanity killer. The architecture of the two would be vastly different.


for certain definitions of intelligent

Yes, but I would argue that those definitions are not really relevant to this discussion. You say...

behavior does not have to be self-directed to be intelligent

...which is true, but the whole point of AI is to get to a point where computers are self-directed; where we don't have to laboriously tell the computer what to do; we just give it a general goal statement and it figures out how to accomplish it. If we have to continuously intervene to get it to do what we want, what's the point? We have that now. So this...

Its (supposed) neural network could be modified externally without any self-direction whatsoever.

...is also not really relevant, because the whole point is to develop AIs that can modify their own neural networks (or whatever internal structures they end up having) as an ongoing process, the way humans do.

(Btw, one of the reasons I keep saying this is "the whole point" is that developing such AIs would confer a huge competitive advantage, compared to the "intelligent" machines we have now, which only exhibit "intelligent" behavior with continuous human intervention. So it's not realistic to limit discussion to the latter kind of machines; even if you personally don't want to take the next step, somebody else will.)

The word "being" perhaps is the stumbling point here.

No, I think it's the word "intelligent". See above.

I don't think there is any programming error that could turn an intended knowledge-loving machine into a self-preserving amoral humanity killer.

I don't think you're trying hard enough to imagine what effects a programming error could have. Have you read any of Eliezer Yudkowsky's articles on the Friendly AI problem?

The architecture of the two would be vastly different.

Why would this have to be the case? Human beings implement both behaviors quite handily on the same architecture.


I have a feeling our disagreement is largely one of terminology.

>the whole point of AI is to get to a point where computers are self-directed; where we don't have to laboriously tell the computer what to do;

There is much in between laboriously telling it what to do and having a self-directed entity traipsing in and out of computer networks. A system that is smart enough to figure out how to accomplish a high-level goal on its own doesn't have to be a self-directed entity. It just needs to be infused with enough real-world knowledge that the supposed optimization problem has a solution.

>because the whole point is to develop AI's that can modify their own neural networks (or whatever internal structures they end up having) as an ongoing process, the way humans do.

I think you give us too much credit. We may guide our learning processes, but the actual modification of our neural networks is completely out of our control. The distinction here may seem useless, but in this case it is important. A non-self-directed but intelligent being would simply not have the ability to direct its learning processes. We would still have to bootstrap its learning algorithms on a particular dataset to increase its knowledge base. But this isn't contradictory to general AI, nor would it be useless. In fact, I would say the only thing we would lose is the warm fuzzies that we created "life". It's still just as useful to us if we're in control of its growth.


A system that is smart enough to figure out how to accomplish a high level goal on its own doesn't have to be a self-directed entity.

But a system that can do this and is a self-directed entity provides, as I said, a competitive advantage over a system that can do this but isn't self-directed. So there will be an incentive for people to make the latter kind of system into the former kind.

We may guide our learning processes, but the actual modification of our neural networks is completely out of our control.

As you state it, this is false, because guiding our learning processes is controlling at least some aspects of the modification of our neural networks. But I agree that our control over the actual modification of our neural networks is extremely coarse; most aspects of it are out of our control.

It's still just as useful to us if we're in control of its growth.

No, it isn't, because if we're in control of its growth, its growth is limited by our mental capacities. An AI which can control its own growth is only limited by its own mental capacities, which could exceed ours. Since one of the biggest limitations on human progress is limited human mental capacity, an AI which can exceed that limit will be highly desirable. However, the price of that desirable thing is that, since by definition the AI's mental capacity exceeds that of humans, humans can no longer reliably exert control over it.


>No, it isn't, because if we're in control of its growth, its growth is limited by our mental capacities

I don't know why you think this is true. In fact, we do this all the time. For just about any decent-sized neural network we train, we are incapable of comprehending how it functions. Yet we can bootstrap a process that results in the solution just the same. As long as we are able to formulate the problem of "enhance AI intelligence", it should still be able to solve such a problem, despite our lack of intellect to comprehend the solution.


I don't know why you think this is true. In fact, we do this all the time. For just about any decent-sized neural network we train, we are incapable of comprehending how it functions. Yet we can bootstrap a process that results in the solution just the same.

That's because we can define what the solution looks like, in order to train the neural network. We don't understand exactly how, at the micro level, the neural network operates, but we understand its inputs and outputs and how those need to be related for the network to solve the problem.
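A minimal, purely illustrative sketch of that point: we specify only the input/output pairs (XOR here) and a generic training procedure; the learned weights that constitute the "solution" remain opaque to us. All the particulars (layer size, learning rate, epoch count) are arbitrary choices for the sketch.

```python
import math
import random

random.seed(0)  # deterministic run for the sketch

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# We define ONLY what the solution looks like: inputs and targets (XOR).
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

H = 8  # hidden units; their learned roles will be incomprehensible to us
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def forward(x):
    h = [sigmoid(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j]) for j in range(H)]
    o = sigmoid(sum(w2[j] * h[j] for j in range(H)) + b2)
    return h, o

lr = 0.5
for _ in range(15000):  # plain online backpropagation
    for x, t in data:
        h, o = forward(x)
        d_o = (o - t) * o * (1 - o)
        for j in range(H):
            d_h = d_o * w2[j] * h[j] * (1 - h[j])  # uses pre-update w2[j]
            w2[j] -= lr * d_o * h[j]
            b1[j] -= lr * d_h
            for i in range(2):
                w1[j][i] -= lr * d_h * x[i]
        b2 -= lr * d_o

# We can verify the input/output relation without understanding the weights.
preds = [round(forward(x)[1]) for x, _ in data]
```

After training, `preds` matches the XOR targets, yet inspecting `w1` and `w2` tells us almost nothing about *how* the network does it; all we ever wrote down was the relation between inputs and outputs.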

As long as we are able to formulate the problem of "enhance AI intelligence", it should still be able to solve such a problem, despite our lack of intellect to comprehend the solution.

You've pulled a bait and switch here. Above, you said we didn't comprehend the internal workings of the network; now you're saying we don't comprehend the solution. Those are different things. If we can't comprehend the solution, we can't know how to train the neural network to achieve it.

If the problem is "enhance AI intelligence", then we will only be able to do so if we can comprehend the solution, enough to know how to train the neural network (or whatever mechanism we are using). At some point, we'll hit a limit, where we can't even define what "enhanced intelligence" means well enough to train a mechanism to achieve it.


> There is no reason to think that a general AI would have any of these things, thus concerns about it deciding that it's better off without us are completely unfounded.

That's addressing a strawman; I suspect you don't understand the argument. It would be narcissistic to worry about advanced AI meddling in human affairs, unless it were specifically programmed to do so.

But that's not what people are worried about. The concern is that we would be no more to a super-intelligent AI than ants are to us. The concern is that these slow, stupid fleshy bags of meat would be a nuisance in the way of an amoral AI.


>The concern is that these slow, stupid fleshy bags of meat would be a nuisance in the way of an amoral AI

Why would this be a concern unless the end result is an AI who decided to get rid of us? I think I understood the argument just fine.

Edit: I think I see where the misunderstanding is. For an AI to decide we were a nuisance, it would necessarily need some kind of drive that we were getting in the way of. No drives means no decision regarding our fate.


Humans have an annoying tendency to monopolize the energy and material resources of the planet we occupy... that could be a nuisance for an AI with other plans, without the end goal having anything to do with humans at all.

And all AIs have goals. That's what an AI is: a utility-optimizing machine. Utility implies a goal, an end-game state of affairs with maximal expected utility.
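That claim can be made concrete with a toy sketch (all names and numbers here are invented for illustration): a utility function scores outcomes, and the "agent" is nothing more than an argmax over expected utility.

```python
def expected_utility(outcome_probs, utility):
    # Expected utility of one action: sum over outcomes of P(outcome) * U(outcome).
    return sum(p * utility(outcome) for outcome, p in outcome_probs.items())

def choose_action(actions, utility):
    # The whole "agent": pick the action with maximal expected utility.
    return max(actions, key=lambda a: expected_utility(actions[a], utility))

# Toy world, hypothetical numbers: each action maps outcomes to probabilities.
actions = {
    "make_paperclips": {"paperclips": 0.9, "nothing": 0.1},
    "stay_idle":       {"nothing": 1.0},
}

# Note: humans appear nowhere in this utility function, which is
# precisely the worry raised in the comments above.
utility = {"paperclips": 10.0, "nothing": 0.0}.get

best = choose_action(actions, utility)  # -> "make_paperclips"
```

Everything interesting about such a machine lives in the utility function; the optimization loop itself is indifferent to what it optimizes.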


An intelligence without a goal or motivation would be completely useless. Sure, it would be safe to be around, but it also wouldn't have any reason to improve its own intelligence or do anything useful. It would just do nothing, because there would be no reason for it to do anything.


Right. It would do nothing until we gave it a command to carry out. Personally, I would prefer it that way.


Once it has a command to carry out, it is no longer idle. It has a goal: to carry out whatever that command is. If the goal you give it is not exactly the same as humanity's goals, then it could do things that were unintended, or things that conflict with our normal goals (e.g., you ask it to make money, so it goes and robs a bank). I seriously suggest reading these:

http://lesswrong.com/lw/ld/the_hidden_complexity_of_wishes/

http://wiki.lesswrong.com/wiki/Paperclip_maximizer


Interesting ideas, not something I had previously thought of.


So essentially, what happens if your AI turns out to be like Dexter? I suspect that early forms might be almost MORE likely to, since the character (and the theory he's based on) is essentially human with a "bug" in dealing with morality. Which leads to another point: you determine your self-aware AI is likely to commit murder based on a debug session. Are you allowed to wipe it without having a trial first?


Empathy is a very specific trait that evolved in humans; it would be unlikely that the first AIs would have it, and if they did, it wouldn't likely be exactly the same as the version humans have. I expect the first AIs to be psychopathic/amoral, or else to have an entirely different moral system from our own. The first is scary enough; the second could lead to very disturbing dystopias.

For example, the AI force feeds everyone happy pills to maximize happiness. Or kills everyone to stop anyone from ever suffering again. Or maybe it values lots of beings and so forces us to reproduce as much as possible. All sorts of disturbing worlds are possible if the AI doesn't have exactly the same values we have. And we don't even know what our own values are.


I think that as long as we do not understand the relation between intelligence and consciousness, we should probably assume that an AGI has moral issues and personal needs.


Consider the strength of an AI that could know you and your tastes, ideas, interests and concerns very well, continually scan all communications it could access, and continually keep a list for you of the top-ten things you might want to know about.

Imagine never missing a proof you need, a bit of news that might instantly leverage your talents, the bottoming out of your short position, a piece of entertainment that you might not hear about for YEARS otherwise.

Now how much would you pay if that plane could flap its wings?


Personally I think that sounds terrible. What is there for me to do in this system? Equally, how would I ever discover new interests and ideas?

I actually rather enjoy discovering an album or a TV show that I enjoy, even if it's obscure or long-forgotten.


But if you want to build a classifier that distinguishes a bird flight path from that of an airplane (or from a helicopter), you're going to have to model bird-specific dynamics at some point.

Likewise, if you're trying to beat captchas -- i.e., simulate human answers -- and there seems to be no capturable regularity to the human answers (as suggested in the article), then the easiest approach may very well be to simulate human-specific (thinking) dynamics directly.

OTOH you're right that if your goal is not to predict some natural system (birds or humans) but simply to do some more general task very well (flying, or eg computer design), then yes, you're best off discarding nature's designs in favor of an approach that captures the more essential elements of the problem domain (aerodynamic lift or processor efficiency) -- and thereby end up with fixed rather than flapping wings, or binary logic rather than neural nets.


Aside: The Loebner prize is a terrible implementation of a Turing test because the judges have no idea what they're doing. You can't just try to have a normal conversation and see which one sounds more human. You _have_ to try tricky things like the article mentions; besides syntactically ambiguous sentences, you might also try things that would be obvious to a human but that a computer would easily get confused by unless it was programmed to check for them. Likenotusingspaces. O.r. .i.n.t.e.r.l.e.a.v.i.n.g. .l.e.t.t.e.r.s. .w.i.t.h. .s.o.m.e. .r.a.n.d.o.m. .c.h.a.r.a.c.t.e.r.
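As a sketch of how special-cased such checks have to be, here's one hypothetical normalizer for exactly one of those tricks (a single character interleaved between the real letters); every other trick would need its own hand-written rule, which is rather the point:

```python
from collections import Counter

def strip_interleaved(text):
    """Heuristic: if one non-alphanumeric, non-space character accounts
    for roughly half the message, assume it was interleaved as filler
    and strip it. Crude on purpose: it would also strip legitimate uses
    of that character, and joined-up words like 'Likenotusingspaces'
    would need a separate dictionary-based segmenter."""
    counts = Counter(c for c in text if not c.isalnum() and not c.isspace())
    if not counts:
        return text
    sep, n = counts.most_common(1)[0]
    if 3 * n >= len(text):  # the separator dominates the message
        return text.replace(sep, "")
    return text
```

For example, `strip_interleaved("i.n.t.e.r.l.e.a.v.e.d")` yields `"interleaved"`, while ordinary prose passes through unchanged. A human does this normalization without thinking; a chatbot handles it only if its author anticipated the trick.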

Also, both the participants being tested (AI and human) should be trying to convince the judge that they're the human and the other is the AI. Given that chatterbots that insert "one-liners" and other non sequiturs do well, I doubt that's happening, so again, the Loebner test is useless.


"Is it an AI" tests based only on text are pretty silly anyway. There are probably a few billion people alive who can't form complete written sentences. If you have a job, you probably encounter dozens of people a year who can't write coherent emails.

The Turing Test concept appeared when everybody thought intelligence was mostly words, and we could probably easily make computers spit out meaningful sequences of words. Nobody believes intelligence is how well you can write and read anymore (except fad-based media reports "OMG SIRI IZ INTELLUGENT AI SKYNET!"). The Turing Test concept persists because it's simpler than dirt to explain, even though it has never been a valid method of evaluating anything.


I don't think you are being fair on the Turing test, or on the intelligence of its creator.

In describing the setting of the test, Turing first describes an imitation game where a judge must tell apart a man and a woman. He writes "In order that tones of voice may not help the interrogator the answers should be written, or better still, typewritten. The ideal arrangement is to have a teleprinter communicating between the two rooms. Alternatively the question and answers can be repeated by an intermediary." So the text format is out of the practicalities of defining a workable test, not because Turing thought 'intelligence is how well you can write and read'.

Your other objection is that there are humans who can't form written sentences. Well, they could use the intermediary, if they can't write.

But, even if you mean that they can't put a verbal sentence together, the test is framed such that a machine that can pass it can be considered to think, not so that any machine (or human) that can think is supposed to be able to pass the test.

Again, from Turing's paper: "May not machines carry out something which ought to be described as thinking but which is very different from what a man does? This objection is a very strong one, but at least we can say that if, nevertheless, a machine can be constructed to play the imitation game satisfactorily, we need not be troubled by this objection."

In other words, passing the test - succeeding at the imitation game - is designed to be a sufficient test to demonstrate intelligence, but not a necessary one.


You are quite right. Oddly, I think there's a distinction between the mass-media version of what a Turing Test is and the actual imitation game.

Drifting culture impacts what counts as "passing" too. Way Back Then, everything was slightly more formal, proper, and precise. These days, I have SMS chats with some people who rarely reply with more than one or two words (or maybe an emoji if they're feeling really communicative at the moment).


Sorry, I normally hate nitpicking but since I saw it three times: it's Turing, not Turning.


Wow, I didn't intend that at all (and in all three places too, that's quite a failure). Thanks for noticing. Either my fingers are misbehaving or autocorrect needs a bigger dictionary.


The thing about changing the spacing of words isn't really fair on the computer. It'd be like spelling words phonetically and expecting a deaf person to understand it. Or if I gave you text in binary and expected you to understand quickly.

The point is to prove that the computer is actually understanding what you are saying in normal English. It's not necessary for it to be able to figure out different formats that words can be written, in order to prove that it's intelligent.

Also, it's a terrible implementation of the Turing test on purpose. No one has ever come close to real AI that could pass a real Turing test. The point of the prize is more about who can come up with the most convincing chatbot. At least if I understand it correctly.


That would be a very useful test for telling deaf and hearing people apart.


Essentially, the article accuses AI researchers of searching for their lost keys under streetlamps. Fair enough, but I wonder if the best use of AI research is to get computers to solve problems that a person could easily solve. I thought the point of, e.g., a search engine is to solve problems a person can't easily solve.


> I wonder if the best use of AI research is to get computers to solve problems that a person could easily solve

That's pretty short-sighted. These "easy" problems are stepping stones to much more difficult problems. For instance, you cannot tell a machine something like "build my website" without it being able to solve the easy stuff first.


Oh come on, sure you can tell a computer "build my website". You can use language, or you can click a button.

Oh, but the result is not what you want? Even when communicating with a human, you need to provide some specs, right? But natural language is way too imprecise and open to interpretation, coupled with the fact that people don't really know for sure what they want, so even when speaking with a developer, you're going to go through several iterations. So for the ultimate flexibility and automation, the user interface of such a smart appliance will end up being essentially a programming language (the same reason lawyers and mathematicians have their own languages that are anything but natural).

Truth of the matter is, computers answering to commands aren't really that interesting, as they already do this. What we really want are computers capable of ideas.

Also, natural language processing is hard because it's based on an incredible amount of implicit context that life has taught us ever since we were born, and when communicating with our fellow humans, it's never about just the actual words being spoken. When saying something like "build my website", your spouse probably knows you're talking about a website meant for marketing yourself, and she probably knows your tastes and values too; she observed your fears and dreams while living your mundane life, even if you've never talked with her explicitly about such issues (speaking of women, they are incredibly good at reading between the lines).

Solving natural language processing is basically akin to creating sentient beings.


> So for the ultimate flexibility and automation, the user interface of such a smart appliance will end up being essentially a programming language

No; when a client asks me to build them a website, I don't demand a specification. I interrogate them in regular natural language about each thing, until I'm reasonably sure which things they care about being a certain way (I'll make those things that way to the best of my ability) and which things they don't care how they end up (I'll make those however I "like" to make them, or arbitrarily if I have no opinion).

I think the one thing we're lacking in software UX right now is the concept of a dialogue: you and the computer both asking questions to clarify your mental model of what the other agent currently has in mind, and adding facts to correct misconceptions until those models are aligned.
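A deliberately tiny sketch of that dialogue idea (every field name here is invented): the system holds a model of the user's intent, asks about whatever is still unknown, and stops once the two models are aligned.

```python
def clarifying_dialogue(model, ask):
    """model: dict of attribute -> value, with None marking unknowns.
    ask: callable returning the user's answer for a given attribute.
    Fills every gap in the shared mental model, then returns it."""
    for attr, value in model.items():
        if value is None:  # a gap: the system asks rather than guesses
            model[attr] = ask(attr)
    return model

# Hypothetical website spec: one field already known, two still open.
site_spec = {"color_scheme": None, "has_blog": None, "layout": "two-column"}

# Stand-in for the human side of the dialogue.
answers = {"color_scheme": "dark", "has_blog": True}

spec = clarifying_dialogue(site_spec, answers.get)
# spec now has no remaining None values: the models are aligned
```

A real version would also let the user ask questions back and volunteer corrections, but the shape is the same: converge the two models, then act.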


In order for a computer to be able to build your website, it would have to be pretty intelligent. The computer would have to be able to learn on its own, reason, etc. These aren't stepping stones towards that. A bunch of special-purpose algorithms is good, but you have to have a fully general one to do most of the work. Once you have that, you could just have it learn the rest on its own.


And once a machine has learned to do something "easy" in the same way we do, that is via sensory perception, it will be extremely exciting to see how that knowledge can be forked and evolved via the excellent copyability inherent in computers.


> Essentially, the article accuses AI researchers of searching for their lost keys under streetlamps.

As far as I understand the history of AI research, people have also searched for their keys in the dark. They just haven't found them.


I want both.

AI that can do things we can't easily do: that's for building Multivac.

AI that can do things we can easily do: that's for building Asimov-style robots.


This comment, together with the original article, reminds me of a quote from Dijkstra on how pointless it is to get AI to do human work, when we should instead enhance humans' abilities. For instance, my intelligence combined with a computer + terminal + any REPL >>> my intelligence otherwise.

Does anyone know the precise quote from Dijkstra?


I don't know the quote in question, but it sounds interesting. A little searching turned up two interesting thoughts from Dijkstra on AI:

Finally, to correct the possible impression that the inability to face radical novelty is confined to the industrial world, let me offer you an explanation of the continuing popularity of artificial intelligence. You would expect people to feel threatened by the “giant brains or machines that think”. In fact, the frightening computer becomes less frightening if it is used only to simulate a familiar noncomputer. I am sure that this explanation will remain controversial for quite some time for artificial intelligence, as a mimicking of the human mind, prefers to view itself as being at the front line, whereas my explanation relegates it to the rearguard. (The effort of using machines to mimic the human mind has always struck me as rather silly. I would rather use them to mimic something better.)

“The question of whether computers can think is like the question of whether submarines can swim”


You should read "Fluid Concepts and Creative Analogies". There was some good headway made in this direction that got lost.


>Fair enough, but I wonder if the best use of AI research is to get computers to solve problems that a person could easily solve.

I don't know about "best", but solving problems that a person can solve will increase the standard of living and create immense economic growth, because solving problems that only humans can solve means that more human work can be automated.


Will it really increase overall standards of living? We already have problems with there not being enough jobs to go around, automating more human work will just make that worse.


That's called ludditism.


I see this a lot online: pointing out that a concept has a name is not a refutation.


Also they got the name wrong. Not to mention the definition.


Even though people can do these things easily, AI automates them. You don't have to actually hire a personal assistant to follow you around and do stuff for you when you can just have an AI on your phone.

And then there is doing stuff that isn't easy for humans that requires at least the same level of intelligence. Like engineering things or writing computer programs.


Solving questions a human could easily solve is a first step, don't you think?


From a robotics standpoint, there are lots of problems that humans find easy in the general case, but might still be hard for humans because of context (e.g. deep within a wrecked nuclear plant, on Mars, etc).


My thoughts about [artificial] intelligence.

Senses do not provide the data for intelligence. Their sole purpose is to provide syncing between reality and the internal representation. Intelligence perceives and operates on this internal representation only.

Language is just another sense that helps with syncing the internal representation with some sort of reality (might be physical reality, might be the reality of social relationships, might even be the internal representation of another intelligent being).

I've seen logs of conversations between an AI and a human, and between two humans. What struck me was how much more verbose the AI and the human talking to it were. Humans between themselves used such shorthands that the conversation was almost unintelligible to me, just because I'm not American or even a native English speaker.

What useful AI systems to date lack is the right internal representation. Current interaction between the real world and AI systems resembles the interaction between an application's components and unit-testing mocks. The interface is sort of correct, but the mocks lack the actual meat inside.
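To make that analogy literal (class and method names here are invented for the sketch): a mock implements the same interface as the real component, so calls go through, but nothing real backs the answers, which is roughly the relation between current AI systems' representations and the world.

```python
class RealWeather:
    """The real component: answers would be backed by the actual world."""
    def temperature(self, city):
        raise NotImplementedError("would read sensors, consult physics, ...")

class MockWeather:
    """The mock: same interface, no 'meat' inside."""
    def temperature(self, city):
        return 20.0  # a canned answer, whatever the city

# Code written against the interface can't tell the difference --
# until the answers have to track reality.
t = MockWeather().temperature("Warsaw")
```

Current AI systems are in the `MockWeather` position: they satisfy the conversational interface without a world model behind it.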

AI research should IMHO go towards simulating reality, physical and social, in a flexible enough way, and towards finding ways of syncing this simulation to reality using narrow data channels such as language, heavily filtered (moving) images, and sounds.


I disagree that an intelligence operates on distinct internal representations of things. I believe that our minds mostly operate on our memories of sensory data. When I remember a song in my head, I'm not iterating through an internal representation; I'm remembering my sensory experience of it. There is an important component of syncing internal representations when communicating, but I think that internal representation is mostly just recordings of sensory information; it's not fundamentally different from what we sensed at some point in the past.


Memories don't work like that. Research on eyewitness accounts indicates that people remember very little. They fill in the gaps with information they acquired after the event and, more important for my point, with information reasoned from the things they actually remember. And they are not aware that they filled anything in. They remember the things they made up exactly the same way as the actual memories of the event.

Each time you remember something, it's being reconstructed, re-imagined by your brain. A method of treating people with stressful memories is based on that: you let them recall the things that bother them in a safe, positive environment, and after a few such recalls, a large part of the stress associated with the memory disappears. Reading a memory is a sort of "destructive read": to keep the memory, the brain has to write it again after reading it, but what gets written back is not exactly what was read.

When you look at what's in front of you, you don't see the whole image, even though you feel that way. You see only the thing you are focused on (inattentional blindness); the rest is filled in for you. You might argue that this fill-in comes directly from previous sensory input. I think you could design an experiment where you show a person a misleading image while using TMS to push the brain toward a wrong interpretation, focus the person's attention on some part other than the misleading one, and then stop influencing the brain so it could analyze the previously recorded sensory input properly. I'd bet the person would keep seeing the rest of the image as they saw it while their brain was being influenced, until you allowed them to focus attention on the misleading parts. Only then would they be able to readjust.

The perception of illusions also indicates, to me, the existence of internal representations. With the http://en.wikipedia.org/wiki/Spinning_Dancer illusion you can feel the moment when your brain switches between two completely different representations of an ambiguous image.

When you hear sounds, what you hear and remember is not just sensory input; what you hear depends on the context. An awesome demonstration of that effect: http://www.youtube.com/watch?v=8T_jwq9ph8k&feature=player_de... When you hear the song backward for the first time, you hear nothing. But once you see the text you were supposed to hear, you can actually hear those words. I think that just presenting you with the text and asking whether you heard it the first time wouldn't change what you remember hearing. Try pausing when the text for the reversed version shows up and see if your memory of the reversed song matches any of that text.

I think this effect happens because the senses are a very thin channel: sensory input is aggressively used to construct an internal representation (with much of the representation created from experience) and then discarded.


I don't disagree with any of your examples, but I would interpret them differently. There is certainly a fair amount of "extrapolating" going on subconsciously. Our brains attempt to extract higher level meaning from sensory input (such as rotation or relative size of objects). This is a sort of knowledge that is based on the totality of sensory input received up until that point (i.e. the experience that a silhouette is likely a 3D object that is spinning in a particular direction). But I don't consider this knowledge as being distinct from the sensory input itself, rather an abstraction over a set of similar inputs that give it meaning.

Personally, when I imagine a horse, I don't imagine some abstraction of a horse. My subconscious mind pieces together chunks of images from my experiences with horse-images and assembles something reasonably close. The stuff of mental computation is, to an extent, our memories of sensory inputs themselves, or abstractions over similar classes of inputs.

Thinking about it further, our ideas may not be as far apart as they seem.

Do you consider the "sensory input" as, say, the light waves hitting the retina, or the set of neural states triggered that induces a "qualia" experience of sight? In my explanation I was considering the qualia as the sensory input rather than the frequencies of light. Perhaps you're using the other definition?


> Do you consider the "sensory input" as, say, the light waves hitting the retina, or the set of neural states triggered that induces a "qualia" experience of sight?

I consider sensory input to be everything from the retina up to the point when you become aware that a horse just passed you.

I think that only this high-level information gets stored and used for all intellectual activity. The actual sight, sound, and smell of a horse are stored only to the extent that lets you recognize horses better in the future; they're not part of any later reasoning about why the horse was there, where it was going, or whether it would be cool to own a horse. You use an abstract representation of a horse for all those thoughts.

> Personally, when I imagine a horse, I don't imagine some abstraction of a horse. My subconscious minds pieces together chunks of images from my experiences with horse-images and puts together something reasonably close.

It feels that way, but if you tried to draw or sculpt a horse, you'd see how many of the pieces you thought you recalled you actually made up, or have no idea how they really look. If I'm not mistaken, you admit that the horse you imagine gets rebuilt from bits and pieces that are stitched together. In my opinion, the foundation of that construct is the internal abstract representation of the horse concept.

> (i.e. the experience that a silhouette is likely a 3D object that is spinning in a particular direction)

In my opinion, the brain doesn't switch between "spins right" and "spins left" but between "this person is slightly above me" and "this person is slightly below me." The change in the direction of rotation is just what tells you very clearly that your brain switched. It's not only the perception of a 3D object that changes but the whole scene, the relation between observer and object.


>If I'm not mistaken you admit that the horse you try to imagine gets rebuilt from bits and pieces that are stitched together. In my opinion foundation of that construct is that internal abstract representation of a horse concept.

The way I imagine this works is that sensory input fires some particular set of neurons, which accounts for our sensory experience of the horse. When we recall a mental image of a particular horse, our brain attempts to recreate, as best it can, the neural firing pattern from the actual sensory input. Of course, this pattern gets distorted, as we do not remember specific images as a whole (unless one has a photographic memory), but rather pieces of images that represent certain abstractions over portions of a subject. These patterns are recreated by firing certain "bootstrap" neurons (memory units) that downstream cause the recreated pattern.

Expanding on this further, I can imagine our image-storage system being something like a many-dimensional quadtree, except that instead of just spatial dimensions it also extracts colors, shapes, patterns, textures, etc. Different meaningful concepts are stored in different layers of the neural network, and some approximation of the original can be recreated on demand. This can certainly be considered an abstract representation, yet it is still tied to, and semantically similar to, a raw 2D mapping of the image. The difference is mainly storage efficiency, due to compressing similar concepts learned from our experiences.
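To make the quadtree analogy concrete, here's a toy lossy quadtree over a grayscale grid: near-uniform regions collapse to a single averaged leaf, and the image is reconstructed approximately on demand, much like the distorted recall described above. The grid, tolerance, and structure are invented purely for illustration.

```python
# Toy lossy quadtree: compress a 2^n x 2^n grayscale grid by merging
# regions whose values all fall within a tolerance, then reconstruct
# an approximation of the original on demand.

def build(grid, x, y, size, tol=16):
    vals = [grid[y + j][x + i] for j in range(size) for i in range(size)]
    if max(vals) - min(vals) <= tol or size == 1:
        return sum(vals) // len(vals)               # leaf: one averaged value
    h = size // 2
    return [build(grid, x,     y,     h, tol),      # children: NW, NE, SW, SE
            build(grid, x + h, y,     h, tol),
            build(grid, x,     y + h, h, tol),
            build(grid, x + h, y + h, h, tol)]

def render(node, x, y, size, out):
    if isinstance(node, int):                       # leaf fills its whole region
        for j in range(size):
            for i in range(size):
                out[y + j][x + i] = node
        return
    h = size // 2
    render(node[0], x,     y,     h, out)
    render(node[1], x + h, y,     h, out)
    render(node[2], x,     y + h, h, out)
    render(node[3], x + h, y + h, h, out)
```

Flat regions compress to single leaves while detailed regions stay exact, so the reconstruction is close to, but not identical with, the original input.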


Levesque saves his most damning criticism for the end of his paper. It’s not just that contemporary A.I. hasn’t solved these kinds of problems yet; it’s that contemporary A.I. has largely forgotten about them. In Levesque’s view, the field of artificial intelligence has fallen into a trap of “serial silver bulletism,” always looking to the next big thing, whether it’s expert systems or Big Data, but never painstakingly analyzing all of the subtle and deep knowledge that ordinary human beings possess.

This is very true. But I think that recent AI experts (as opposed to those doing this work in the 70s) have realized that trying to tackle linguistic analysis is very very (very) hard. The problem with language (or more correctly, discourse) analysis is that even outside the realm of computing, it still hasn't been fully explicated.

A couple of months ago I took a graduate philosophy of language seminar (taught by the brilliant Sam Cumming at UCLA) in which we looked at various theories of discourse. It would be an understatement to say that these theories vary wildly. We have the classical RST (Rhetorical Structure Theory) by Mann and Thompson[0] (renowned linguists at USC & UCSB), Jan Van Kuppevelt's erotetic model[1], Andrew Kehler's Theory of Grammar[2] and a half-dozen or so more that I don't even remember.

So let's forget about computers for a second. We don't even know how humans process discourse. My term paper was about the parallel relation which is a very talked-about topic (almost as much as anaphora; see The New Yorker article) in the academic community; not only are such linguistic phenomena difficult to theoretically model, they are nigh impossible to practically implement (say, in some sort of AI schema).

So I'm not even surprised most AI folks just started doing work on SVMs, ANNs, Markov chains, or what-have-you. It seems more practical to work on things that can actually benefit from machine learning, as opposed to trying to solve incredibly difficult (and mostly theoretical) problems like discourse analysis.

The bottom line is that we're still a ways off from having computers like those in Star Trek - computers that understand anaphora, parallelism, ellipses, etc, etc.

[0] http://www.sfu.ca/rst/

[1] http://www.jstor.org/stable/4176301

[2] http://www.amazon.com/Coherence-Reference-Theory-Grammar-And...


Thanks for your post - very specific and informative.

I believe that it will not be necessary to cope with the corner cases of discourse for a machine to be regarded as intelligent, and I believe that machines that can cope with those corner cases (if ever realisable) may not be regarded as intelligent (I mean, doing this is not sufficient).

I think that in this sense discourse analysis is rather like computer chess - a goal that if attained may seem rather irrelevant to the grander challenge of creating machines that are capable of significant contributions to our culture.

I think that we risk attacking the problems that appear to us to be interesting.


>> So let's forget about computers for a second. We don't even know how humans process discourse...

>> It seems more practical to do work on stuff that could actually benefit from machine learning, as opposed to trying to solve incredibly difficult (and mostly theoretical) problems like discourse analysis.

The thing I am not sure about is how much sense it makes to attempt discourse analyses versus doing more "practical" work. The reason is that a complete and accurate discourse analysis seems nearly impossible. Having read through a few reference grammars and phonological analyses of various languages myself, it seems the rules linguists create are always fraught with copious exceptions and special cases (though certainly not all! There are examples of phonological rules that are very regular across languages). Some of these analyses were extremely intricate.

I got the impression that many of them suffered from the mistake of trying to impose patterns on data that doesn't have any.


I completely agree. Having worked with some AI systems in the past (mostly ANNs although I'd love to even begin to understand SVMs), I think that there's more "practicality" in just building robots/software that (based on various pattern analyses/etc/etc) figures out where a malignant tumor is, for example, rather than a robot that can understand coherence relations or what-have-you.

So I don't fault the CS community for stepping away from linguistic puzzles and getting on with more useful stuff. After all, that's what engineering is. It looks like Levesque disagrees, but I'm not sure he's right.


SVMs are lickety-split simple. You're drawing the best line/plane/hyperplane between some points. You can turn this into a nice convex optimization program, given some conditions. If the data isn't fully separable, you can fudge it a little with some penalty terms, or you can cheaply project the points into some space where such a separation does exist. The hard part will always be the feature extraction, the labeled-data collection, and the parameter tuning for everything I just waved my hand at.
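The "best line plus penalty terms" idea can be sketched in a few lines of plain Python using a Pegasos-style stochastic sub-gradient method on the hinge loss. The data, hyperparameters, and the omission of a bias term are simplifications for illustration, not a production SVM.

```python
import random

def train_linear_svm(points, labels, lam=0.01, epochs=200, seed=0):
    """Pegasos-style SGD minimizing lam/2 * ||w||^2 + mean(hinge loss).
    Bias term omitted for brevity, so the classes are assumed to be
    separable through the origin."""
    rng = random.Random(seed)
    w = [0.0] * len(points[0])
    t = 0
    for _ in range(epochs):
        for i in rng.sample(range(len(points)), len(points)):
            t += 1
            eta = 1.0 / (lam * t)                      # decaying step size
            x, y = points[i], labels[i]
            margin = y * sum(wj * xj for wj, xj in zip(w, x))
            w = [wj * (1.0 - eta * lam) for wj in w]   # shrink: the L2 penalty
            if margin < 1:                             # hinge-loss sub-gradient
                w = [wj + eta * y * xj for wj, xj in zip(w, x)]
    return w

def predict(w, x):
    """Which side of the separating hyperplane the point falls on."""
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) >= 0 else -1
```

Kernels fit the same picture: replace the dot product with a similarity function, which implicitly projects the points into a space where a separation exists. As the parent says, the real work is in the features and the tuning of `lam`.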


I wonder if we could make a computer that could understand a simple, made-up language like Lojban. The grammar is consistent and well defined; the hard part would be getting it to learn what words actually mean.


  Can an alligator run the hundred-metre hurdles?
http://www.youtube.com/watch?v=pjslsKZYXQ8 (Can alligators jump and climb? Yes, they can!)

http://www.animalquestions.org/reptiles/alligators/can-allig... (many thought it was a myth, but alligators are in fact able to climb fences)

These anaphora questions are better solved through natural-language parsing, for example by turning them into predicate logic. The article is right when it says that big-data algorithms have a problem with these sentences, but that is not to say they can't help. From the article:

  The town councillors refused to give the angry 
  demonstrators a permit because they feared violence. Who
  feared violence?
You can use big data to calculate the semantic closeness between "town council" and "fear violence". If that is closer than between "demonstrators" and "fear violence" you can make a good guess.

From the Wikipedia article on anaphora:

  We gave the bananas to the monkeys because they were hungry.
  We gave the bananas to the monkeys because they were ripe.
  We gave the bananas to the monkeys because they were here. 
A quick Google search for "bananas were ripe", "monkeys were ripe", "monkeys were hungry" and "bananas were hungry" and counting the results will solve this.
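As a sketch, the hit-count heuristic amounts to comparing co-occurrence counts for each candidate antecedent. The counts below are invented stand-ins for real search-engine results, not actual data.

```python
# Hypothetical web hit counts for each candidate phrase.
hits = {
    "bananas were ripe":   9400, "monkeys were ripe":    12,
    "monkeys were hungry": 8100, "bananas were hungry":   3,
}

def resolve(candidates, predicate):
    """Pick the antecedent whose pairing with the predicate is most attested."""
    return max(candidates, key=lambda c: hits.get(c + " were " + predicate, 0))

print(resolve(["bananas", "monkeys"], "ripe"))    # bananas
print(resolve(["bananas", "monkeys"], "hungry"))  # monkeys
```

A phrase with no entry scores zero, so a genuinely ambiguous predicate (like "here") would give no useful signal, which matches the third example above.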

Using semantic closeness can also work against you in the case of ambiguity:

  The robber sits on the bank.
Is "bank" a furniture here? A money bank? A river bank? Semantic closeness might picture the robber sitting on a money bank. A second or third pass is necessary: calculate the chance that a person will sit on a building vs. the chance that a person will sit on furniture.

This is a decades-old hard problem that is close to philosophy. Is the robot inside Searle's Chinese room intelligent? Is it really understanding what it is saying? Or is it "faking" it?


> You can calculate semantic closeness between "town council" and "fear violence". If that is closer than between "demonstrators" and "fear violence" you can make a good guess.

Only for that particular example. It's easy to come up with examples where that heuristic fails:

"The town council denied the protestors a permit because they were in a bad mood."

"The factory failed to produce the car on time because it was undergoing maintenance."


You're sort of validating the technique: in those examples the meaning of the sentence is absolutely ambiguous. The fact that a google search wouldn't return a conclusive result is expected.


Not for the second one; "car undergoing maintenance" will probably return more results than "factory undergoing maintenance", but one parsing of the sentence (the one where the factory is being maintained) makes far more sense than the other.


But at the same time, people use sentences like this very frequently and there seems to be some sort of heuristic we are using to disambiguate it.

It's not the case that every time an ambiguous sentence is used in real life people will stop and consider the possible cases. Granted, sometimes we will make the wrong assumption on how to disambiguate sentences that are absolutely ambiguous.

Another example where semantic closeness won't really help you (though context/situation would help):

"I looked at the man with the telescope."

Case 1: I looked at the man using the telescope.

Case 2: I looked at the man who was holding the telescope.


Actually I don't really get why anaphora are considered so hard. The difference between

  We gave the bananas to the monkeys because they were hungry.
and

  We gave the bananas to the monkeys because they were ripe.
is resolved simply by checking what you know about the world: monkeys are animals, which can be hungry; bananas are fruits, which can be ripe. The third example is ambiguous because both animals and fruits can be "here".

Similarly, out of all the meanings of 'bank', there's only one that can be sat on.

Consider me naive, but it shouldn't be too hard to build a program backed by nothing more than an SQL database that can handle these.

And consider me SUPER naive, but that's exactly what I am doing at the moment in my spare time.
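The naive SQL-backed approach described above might look something like this sketch, using Python's built-in sqlite3 module. The schema and the handful of facts are invented for illustration; a real system would need vastly more world knowledge.

```python
import sqlite3

# Tiny invented knowledge base: what things are, and what properties
# each kind of thing can have.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE is_a(entity TEXT, category TEXT);
    CREATE TABLE can_be(category TEXT, property TEXT);
    INSERT INTO is_a VALUES ('monkey', 'animal'), ('banana', 'fruit');
    INSERT INTO can_be VALUES ('animal', 'hungry'), ('fruit', 'ripe'),
                              ('animal', 'here'),   ('fruit', 'here');
""")

def resolve(candidates, prop):
    """Return the sole candidate whose category can have the property,
    or 'ambiguous' when world knowledge doesn't narrow it down."""
    q = """SELECT i.entity FROM is_a i JOIN can_be c
           ON i.category = c.category WHERE c.property = ?"""
    allowed = {row[0] for row in db.execute(q, (prop,))}
    matches = [c for c in candidates if c in allowed]
    return matches[0] if len(matches) == 1 else "ambiguous"

print(resolve(["banana", "monkey"], "hungry"))  # monkey
print(resolve(["banana", "monkey"], "ripe"))    # banana
print(resolve(["banana", "monkey"], "here"))    # ambiguous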


Excellent point made here.

The idea that pops into my mind is that whenever we have speech output from computers to humans, the computer should structure the sentences to minimize ambiguity (while remaining efficient). It would be unfortunate indeed if a human misunderstood a computer's instructions or warnings due to ambiguity on the part of the machine.


This is a very simplistic way of looking at the problem. To say that anaphora, ellipsis, parallelism, and the dozen or so other coherence relations are simply "solved through natural language parsing" is just incorrect (and probably meaningless).


I didn't say they were simply solved through natural language parsing. I said they were better solved (and that combining it with other approaches might help).


Interesting that there seems to be no mention of Cyc, which is explicitly intended to be good at problems like those mentioned.


I agree, that is very curious. I clicked through to Levesque's paper, and there is no mention of Cyc there either.

I gather Cyc has long been considered a failure by many in the AI community, and perhaps even an embarrassment by some. Still it is hard not to read Levesque's paper as a vigorous endorsement of the goals of the Cyc project, if perhaps not of its exact methods -- see sec. 4.4. I think it is incumbent on Levesque to explain what Cyc has done wrong, if he has a good theory about that -- and if he doesn't, he needs to admit it, because Cyc's ostensible failure makes it seem unlikely that any symbolic, reasoning-oriented approach to AGI could ever succeed.


If Cyc has failed, then why do there seem to be so many people trying to build XML schemas that could be used to reimplement it?


Cyc-type approaches are useful for narrow, well-specified domains, just not AGI or actual "intelligence."


It has been a few years since I went through Ben Goertzel's team's AGI papers, but I find it difficult to believe that Cyc would not be useful as background real-world knowledge. I would use Cyc, Wikipedia infoboxes, etc.


But is it any good at those problems?


It's best to just let Cyc people continue to think they matter and let them stay busy expanding their knowledge base in the church of first order logic. It'll never go anywhere, but it's better to keep them all contained in one place rather than polluting other fields of study.


A merciful end to Cyc funding might be better. Monies saved could be spent elsewhere and termination would free all involved without the expectation that they produce something of value. Although it might cause some grudges ("If we'd only been allowed to add the last 2000 rules, I am certain it would have achieved consciousness!") everybody needs a new start now and then.

Had Lenat published working versions or even complete explanations of his previous software (AM, EURISKO), I might be a little more sympathetic. Cyc, the project that never ends, appears to be in a state of perpetual partial implementation, its most notable product being a constant stream of money into the project.

But look on the bright side: perhaps I'm completely wrong, Cyc has actually achieved AI, and the NSA is using it right now!


Okay, that was way too nasty and out of line.


See, this is what I mean (see my reply to eschaton).

I take it then, seiji, that you also completely disagree with Levesque's program as expressed in section 4.4 of his paper?


Absolutely. Section 4.4 is a full endorsement of Cyc (and don't get me started on "Cyc" vs. "OpenCyc"). Second, 4.4 looks a little like "graphplan reborn" or "generic AI approaches from the 80s."

Discrete codified knowledge is not the stuff the universe is made of. The problem Cyc will never solve is the "a picture is worth a thousand words" problem. Describing everything in a relational, hierarchical predicate calculus arrogantly ignores the unrelated multidimensionality of, well, everything.


I think we can infer, from Levesque's failure to even mention it, that the answer has to be no. I admit I am curious exactly what Cyc can and can't do, and why. I haven't looked at it closely, though.


What I've heard is that their system has gotten progressively better over time while their publishing has gotten worse.

People in academia can get ResearchCyc and see just what its capabilities are these days; it's basically the same as full Cyc. OpenCyc is a shadow of the real thing, supposedly.


This guy should look into research labeled artificial general intelligence (AGI) or deep learning. If he really understood, for example, what Watson can do, or the leading research using things like autoencoders or hierarchical temporal memory for natural-language understanding, he would have a better-informed and less pessimistic attitude.


The piece actually seems like a backhanded critique of the optimism shown by the deep learning community (which of course Winograd is aware of). It also seems like an appreciative backward glance at topics that interested Winograd earlier in his career, but are less fashionable now.


I've thought about this a lot, here's my best answer as of today.

Part 1 - Existence proofs

We think we can build 'intelligent' machines because human intelligence exists. We don't have a perfect idea of what it means to be intelligent, but we know that whatever we have exists, so it's not impossible to create. Some disagree with this stance on philosophical or religious grounds, and others point out that even if it's possible, it's really, really hard. Fair enough.

Part 2 - Methods

Usually we start by picking the thing we think best typifies intelligence and we work to solve that. Chess, Written Language Comprehension, Visual Pattern Recognition, Spoken Language, etc. These are all called AI. All well and good, but so far we get exceptional single purpose systems (the narrower the domain the better) and then look around and say 'but that's not really intelligent.'

Lately people have started to think more about the process by which things become intelligent rather than just the end behavior. Somehow the machine should naturally transition to being intelligent, as opposed to being explicitly programmed with intelligence. Machine learning is so-hot-right-now because of this. But, again, in the end you get a well-trained system that does whatever it used as its error metric, like differentiating cats and sailboats. "But that's not really intelligent because it can't [play Mozart/read poetry/paint a picture]."

There are infinite criticisms, but they can be summed up as a lack of 'generalness.'

Part 3 - Representation

What people searching for 'intelligence' are looking for is a system that can process data from at least as many sources, in at least as many contexts, as a human. The hard part, and the one thing the brain does really, really well, is relating sight to sound, touch to taste, past to present, and present to future. In us there is a shared language of representation that encodes experience.

In AI so far, it's an unusual system that tries to relate many senses, keep a lifelong memory, work in a noisy and incomplete environment, and constantly make predictions about what will happen next.

Part 4 - Data

It is an unusual AI researcher who has all the data they want. As computer people we are impatient, so waiting 30 years for a robot to collect the 1.4 PB of visual information a human does by that age, or the 1.8 TB of audio information, just isn't done. We use existing datasets that are computationally tractable (meaning you can run them in minutes, hours, or days).

And yet we do not have an existence proof that intelligence of the general, human-like kind can exist without years of exposure to the world. It's reasonable to expect that it's possible; we just don't have pre-existing knowledge of that fact.

Part 5 - The Future

So how will we get there from here? We'll probably have to do it the hard way: create something that can sense the world in ways we can comprehend, and painstakingly rear it, collecting data, automating and hard-coding what we can, until we have the set of error metrics, motivations, data, and environment in which we begin to see the thousand different skills called 'intelligence' that humans take for granted. In short, we'll probably end up thinking a bit more like parents, and a bit less like computer scientists.


> In us there is a shared language of representation that encodes experience.

What you express in the quote above is a very "computationalist" view of AI. FWIW, in contrast, I'm more of a "connectionist" [1]. This means that when I look at the same incredible "general intelligence" of humans -- this almost "synesthetic" ability to abstract and connect ideas -- I do not see a powerful translation and symbolic-reasoning machine with some seemingly magical universal language buried deep within. In fact, I believe there can be no such language without severely compromising the reasoning system's generality.

Instead, I see a powerful "connection machine", where concepts, thoughts, language, etc. are all the result of some incredibly versatile connective learning/creativity process. Of particular importance, I see this connection machine thrive and prosper within systems where there are no axioms, no ground truth, no single agreed-upon notion of what separates "this" from "that" (you'll see this in humans when studying philosophy). The analogy to this notion from a computationalist perspective would be a machine whose instruction set, or core reasoning language, is always in a state of flux.

I will freely admit though that one view does not necessarily have any more explanatory power than the other; they both rely on some "magic" unknown assumptions. For a connectionist, the "magic" is the connective learning algorithm. For a computationalist, the "magic" is in this universal language/symbolic system.

Disclaimer: I'm by no means an AI expert (still have a lot more to learn!), but I always enjoy thinking/discussing these topics and the surrounding philosophy.

[1] http://en.wikipedia.org/wiki/Connectionism


Connectionism is not a predictive theory. Rather, it is the manifestation of a depressingly common fallacy in science: assigning a sacred mystery[1] to an as-yet unexplained phenomenon.

How do your connectionist interconnected networks of simple units actually give rise to general AI? Answer that and you'll have the "shared language of representation" the OP was talking about.

[1] http://lesswrong.com/lw/iv/the_futility_of_emergence/


I completely agree that "super magic emergent intelligence" is not an explanation but a mystery. But it's worth noting that the same applies to this "super magic universal language of representation" -- it's not an explanation, it's a mystery.

It's also important to realize that these aren't beliefs, or truth claims, or scientific claims. They're philosophical perspectives; no more, no less. They might guide the intuition, but have no bearing on the science itself. Someone who doesn't clearly understand the distinction between the philosophy and science of a topic may definitely risk either contributing to the "depressingly common fallacy in science" you mention, or risk misinterpreting a philosophical argument for a scientific one, and hence through blurred vision believe they see fallacy when in fact there is none.

One way of looking at it is that both views are different philosophical angles on the same thing (or at least the same problem/mystery). A connectionist sees this conception of a "shared language of representation" as assigning a sacred mystery (what does this language actually consist of, precisely?) to an as-yet unexplained phenomenon, in the same way a computationalist sees the connectionist's learning algorithm (how does this learning algorithm work, precisely?).

The reason I highlight this philosophical symmetry is to emphasize that these are merely different intuitive mindsets developed towards approaching the common mystery of general intelligence.

The bottom line is so long as human-like "general intelligence" is a mystery (to the extent that we can't replicate it 100%+ effectively in computers), it's going to be an "unexplained phenomenon", and thus any theories developed around it will have some "magic" hole somewhere -- some key element devoid of predictive power. (Because if there were no such hole, then by definition, we'd already have it all figured out.)


FWIW my preferred language of representation is sparse activity of neurons over time :)


Most intelligent human brains have been programmed by their parents and education systems for 10 to 20 years or more. Even if we had an equally powerful ANN, it would still take years to train it into something intelligent; the computational way might be well ahead by then. We still haven't taught ourselves how to program our own brains.


1. Why should an ANN train at the same rate humans do? It's not unreasonable to expect that it will learn a million times faster than any human.

2. While every single human must go through the 10-20-year learning process, there is no reason to duplicate that process for an AI. The connection diagram and weight matrix can easily be copied between ANNs, so they can constantly build on top of each other's knowledge.


> a lack of 'generalness.'

General intelligence seems to be just about every technique people can think of, thrown into a giant feedback loop. Unsupervised learning supervising other supervised learning. Classifiers classifying different possible classifications. Modeling by projecting known models. And, it all happens without you. "I" stand on the shoulders of a giant, just slightly above sea level, with all the inner workings hidden below.


You might be interested in the Artificial General Intelligence Conference series: http://www.agi-conf.org/

See also the "artificial toddler": http://wiki.opencog.org/w/OpenCogPrime:Roadmap#Phase_2:_Arti...


I think we could help improve the field by making CAPTCHAs with these kinds of questions.


This is brilliant.

First you have to come up with a way to automatically generate these questions, though. If we have to make them manually, the machines can just remember all the answers.

Creating the questions seems much harder than answering them.


Won't work. Assuming that the right answer is a word in the question, the computer can just brute force the answer in a couple of guesses.


If you fail 50% of guesses, that should flag you as a spammer.

If you ask a few questions with multiple possible answers it's pretty unlikely it would get them all correct through random guesses anyways.
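The arithmetic backs this up: with k questions of n choices each, a uniform random guesser passes all of them with probability n^-k (assuming independent guesses). A one-line sketch:

```python
def pass_probability(num_questions, choices_per_question=2):
    """Chance that a uniform random guesser answers every question correctly."""
    return (1.0 / choices_per_question) ** num_questions

print(pass_probability(1))     # 0.5
print(pass_probability(5))     # 0.03125 -- about 3% of bots slip through
print(pass_probability(3, 4))  # 0.015625 -- three four-way questions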


The bot can use only 1 guess per website.


Could you just change the number of questions to scale with the number of attempts? I.e., if there are more than 3 attempts at answering a question, ask 2 questions.


"The large ball crashed right through the table because it was made of Styrofoam. What was made of Styrofoam? (The alternative formulation replaces Stryrofoam with steel.) a) The large ball b) The table"

And a very large Styrofoam ball could crash through a comparatively small table made of wood; it all depends on the relative definition of 'large', right?

I think the problem is that everybody can pick his favorite feature and define it as the key to 'general intelligence'. Some say anaphora, some say machine learning; I like Hofstadter, who says it is all about analogy. http://www.amazon.com/Surfaces-Essences-Analogy-Fuel-Thinkin...

Also: the problem is that these statements can't be proven; it is all about opinions and dogmas. I think the argument is a question of power: whoever wins the argument has power over huge DARPA funds, or whatever else gives out grants for this type of research. The rush to define problems (expert systems, big data) in AI might have something to do with the funding problem.

The 'Society of Mind' argument says that there are many agents that together somehow miraculously create intelligence. http://en.wikipedia.org/wiki/Society_of_Mind This argument sounds good, but it makes it hard to search for general patterns/universal explanations of intelligence.

On the one hand they have to focus on some real solvable problem, on the other hand that makes it very hard to ask and find answers to general questions; I don't know if there will be some solution to this dilemma.


Maybe Western civilization is not very good at answering these big general questions; maybe Indian civilization has a better chance. After all, they invented structural linguistics some 2500 years ago (so it was not Chomsky at all ;-)

http://en.wikipedia.org/wiki/P%C4%81%E1%B9%87ini

Maybe the problem needs an idle class of Brahmins who can ask questions and ponder them without end, without having to worry about questions of funding?


Didn't Google recently roll out what they called conversational search, which did the pronoun resolution thing?

The example I recall they were able to answer correctly was 'Who's the president of the USA?' and then 'How tall is he?'
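The carry-over behavior can be caricatured in a few lines: remember the last named entity and substitute it for any pronoun in the follow-up query. Real conversational search is vastly more sophisticated than this; the sketch below only illustrates the idea, and every name in it is made up:

```python
# Toy pronoun carry-over: replace a pronoun in a follow-up query
# with the entity mentioned in the previous query.
PRONOUNS = {"he", "she", "it", "they", "him", "her", "them"}

def resolve(query, context_entity):
    words = []
    for w in query.split():
        stripped = w.strip("?.,").lower()
        words.append(context_entity if stripped in PRONOUNS else w)
    return " ".join(words)

resolved = resolve("How tall is he?", "the president of the USA")
# -> "How tall is the president of the USA"
```

A real system also has to decide *which* prior entity a pronoun refers to when there are several candidates, which is exactly the Winograd-schema problem the article is about.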


I wanted to arrange for Watson to go on "Are You Smarter Than a 5th Grader" and give it all sorts of questions like these (but even easier).

For example:

"Think of a giant flightless bird. Now, name a color that starts with the same first letter."


Eggplant. Or blue.


The article concentrates on the fact that current artificial intelligence (AI) software doesn't really understand the meaning in natural language. Right.

So, then, maybe one approach to AI is via natural language -- program a computer to understand natural language, just typed input should be sufficient initially.

How to do that? My two kitty cats understand some natural language, and I have to suspect that I roughly understand how they did that and how I could program some of it.

Human babies learn natural language, and we have to suspect that the effort is a bootstrap: they learn some really simple things -- e.g., "Ma Ma" and "Da Da" -- and then build on those. "Nice." "Bad." "Ma Ma nice." "Da Da bad." "Food." "Hungry." "Hungry want food."

"When hungry and want food, go to master, reach up with front paw and use claws to pull on shirt but don't pull on skin." -- my kitty cats both already figured this out either independently or learned from each other.

"I can see." "Can he see me?" Kitty cats know that very well, and if they want to scratch or bite (one cat, long ago, just rescued), they know to wait until the target can't see the claw or the mouth about to bite.

So, to come in from the back porch, they wait until there is noise indicating that I'm at the kitchen sink and take a position on the porch where they can be seen -- then I will let them back in.

Then build on such simple things.

That's what I thought long ago. Once I asked DARPA about it, and they had no response.

The author of the OP has another article on how birds and babies learn to understand language. So, maybe more than one person is thinking along those lines.

Doing it first with just text input should show the core problems and be sufficient.

One problem: kitty cats have great internal 3-D geometry. E.g., if the mouse runs clockwise around a packing box, the cat can be smart enough to run counterclockwise. So the cat understands the 3-D box and paths in space. They're not stupid, you know! How to program that? Hmm ...!


How did you "ask DARPA"?


Found an appropriate DARPA problem sponsor and sent him e-mail.


In the first lecture of the Artificial Intelligence course at UC Berkeley, one slide says: "A better title for this course would be: 'Computational Rationality'"


I think the ultimate test (and application) for artificial intelligence will be to tell the computer "Make me money" and have it figure out thousands of ways to do it and start executing on those strategies. Potentially very dangerous outcomes, however, without morality.

Another initial application of artificial intelligence, I think, is in trading financial markets: distilling every point of data to create models that predict markets and make obscene amounts of money.


And an intelligent answer would be "go find a job" :) Or give you a motivational quote from Tumblr :)


Levesque works in knowledge representation, and all the sample questions hinge on some shallow inference from default knowledge. It's understandable to think your own field of research is central -- and it was, in the 80s. What makes it especially productive to focus on now? That's what I'd like to have seen covered here.

The Turing test seems a red herring, since afaik it's not a big part of research evaluation currently.


This whole article, and [I think] Levesque's IJCAI article, seems to think that artificial intelligence is just about language.


It's not that it's just about language, but I do think that a system that could pass the Turing test would necessarily be AGI. To put it another way: to fully solve the NLP problem, one would have to have created AGI. To pass the Turing test, a system would need to simulate or acquire the knowledge of an entire lifetime of a person's experiences and be able to convincingly converse about those experiences with an actual person. That would undoubtedly be AGI.


1. I think "AGI" is a ridiculous recent rebranding of "Hard AI" and doesn't represent any meaningful scientific or engineering pursuit.

2. Even if one accepted that NLP is AI-complete: this is a sufficient condition and not a necessary condition. But the claim being made in the articles is that it is a necessary condition: that is, that other AI disciplines are distractions because they do not lead to the AI grail that NLP supposedly leads to. This is classic GOFAI hogwash.


Hard AI -> Strong AI. Brain not firing on all four cylinders.


Is anyone interested in building an AGI (Artificial General Intelligence) system? I've always been curious about this field, but I've never done anything about it and have no experience. Nevertheless, if there are any hackers who are into AI, curious, or want to build such systems: contact me, let's explore the possibilities.


Isn't that criticism the exact point Chomsky was trying to make in his proxy discussion with Norvig from Google? I found the example given in the article excellent, though. It's another reason why computers won't be able to fix grammar mistakes 100% correctly (at least in Latin languages).


Chomsky doesn't think we should be trying to observe how people use language on a mass scale, so screw him imo. See: http://languagelog.ldc.upenn.edu/nll/?p=3180 "The other argument has to do with the methods of science: Chomsky argues for "very intricate experiments that are radically abstracted from natural conditions". "


What, no actual experts are going to comment in here? I was looking forward to the inevitable flamewars.


any thoughts here on the parallel terraced scan, as implemented by Hofstadter in "fluid concepts and creative analogies"?


So who is going to build a real AI algorithm that is artificial by nature and intelligent in its workings?


I have thought about this long and hard. There was a time in my life when I thought I could create something that had intelligence. Here are some of my thoughts.

Intelligence isn't something that is instant; it is acquired. To acquire intelligence, the being must have an environment in which to interact and learn. My idea was to mimic the way animals and plants adapt to their environment. They all can die, and only the strong survive. So each being, or thread, would have the ability to die. Well, what happens if they all die? Then the experiment would be over, and that wouldn't be fun, would it? So threads would also have to have the ability to reproduce. And what good would it be if all the threads were the same? They must be able to mutate. What about intelligence? Forget intelligence; it just represents the ability to survive. All the threads that can't survive will die off anyway, so what we are left with is a pool of survivors.

Uh oh. The environment exists in a virtual realm, so my threads won't be very beneficial in the real-world realm that you and I live in. We need eyes and ears into the real world. OK, no problem: we will purchase cameras to capture light, and microphones for sound. Now we have a money issue. I may be able to afford a couple hundred microcontrollers, microphones, and cameras. Not bad for a little experiment, but wait: "Scientists estimate that there are one quadrillion (1,000,000,000,000,000) ants living on the earth at any given time," according to hypertextbook.com: http://hypertextbook.com/facts/2003/AlisonOngvorapong.shtml. How can my intelligence be smarter than an ant, if there are this many ants, all trying to survive, with the ability to reproduce and mutate? I don't think it can.

I began thinking about this more and more. We would have to speed up the evolution of our intelligent being. What could we do to speed it up? Perhaps we could create more mutations, but not too many; we don't want our bot to become extinct. Oh crap, how is it going to reproduce? I totally forgot about that. Each of our species would have to have an electronics factory built inside it.

I know the solution: don't try to create your own intelligence. You can create as much ARTIFICIAL intelligence like search engines as you want, but this is not intelligence in any way, shape, or form. It looks like intelligence, but it is not. To achieve something truly more intelligent than a human, you will need to take what nature has already created and mutated, and then provide environment enhancements (schools?). There, I said it. Schools, and all these GNC drugs that are supposed to make your brain function at an increasing rate. I am afraid we are already trying to become more intelligent every single day in the world. And yes, it is working, but we can only work as fast as nature allows us.

On a closing note: computers will never be able to think like a human, because they cannot reproduce. Nature is much more efficient.


What you describe is exactly the same as genetic algorithms (also genetic programming) or artificial life (ALife). If you are interested, there is a lot of information about these things available on the internet and plenty of implementations you can experiment with. And it does work; it can do amazing things. The problem is that evolution is slow. Incredibly slow. It just tries random things; it takes a while before it finds something that actually works. You say that nature is more efficient, but it's not. Not at all. It has simply had way more time than humans, and it has population sizes in the billions (way more for some species).

But with the same amount of resources, human engineers are way better at designing things than evolution. If you give humans enough time, we can figure out how to make intelligence, and we can probably do a way better job than nature. At worst we simply need to reverse engineer how nature did it, at best we find an even better way.
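The die/reproduce/mutate loop described above is the core of a genetic algorithm. A minimal sketch, evolving bit-strings toward an all-ones target; the fitness function, rates, and sizes are all illustrative choices:

```python
# Minimal genetic algorithm: selection ("the weak die"),
# crossover ("reproduction"), and mutation.
import random

GENOME_LEN, POP_SIZE, GENERATIONS = 20, 50, 200
random.seed(0)

def fitness(g):               # stand-in for "ability to survive"
    return sum(g)             # count of 1 bits

def mutate(g, rate=0.02):
    return [1 - b if random.random() < rate else b for b in g]

def crossover(a, b):          # one-point crossover
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
       for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:POP_SIZE // 2]            # bottom half dies off
    children = [mutate(crossover(random.choice(survivors),
                                 random.choice(survivors)))
                for _ in range(POP_SIZE - len(survivors))]
    pop = survivors + children

best = max(pop, key=fitness)
```

Even on this trivially small problem the population needs many generations of blind variation to converge, which is the slowness the comment above is pointing at.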


"Computers will never be able to think like a human, because they cannot reproduce. Nature is much more efficient."

Bzz. Wrong.

I'm curious -- what's your background? Also: try using paragraphs.


...Not to mention what they're putting in our water: http://www.youtube.com/watch?v=_c6HsiixFS8



