
This is an excellent article for raising awareness of AI. However, I disagree with its conclusion that we will always relate to AI as tools, pets, advisers, etc.

When we ask whether machines can think, I believe it is a question of volition and self-direction. Does the machine have an open-ended goal, and is it self-aware? Can it alter its own programming to change its own methods, its own knowledge, and its own goals? If the answer to both is "yes," then I believe we will consider the machine to be thinking.

What are the implications? If machines gain those capacities, as well as the ability to compute emotions, then what separates them from us? If you agree that humanity is fundamentally a function of our minds, then, if machines can compute in fundamentally equivalent ways, they become "human." What evidence is there that this is forever impossible?




>If you agree that humanity is fundamentally a function of our minds

What about the ability to feel pleasure and pain?

I don't believe a deterministic machine will ever experience such subjective feelings (though it may behave as if it does). Computers may well become sapient but they (at least with current technology) will never be sentient.


Thought experiment: if there were a human who suffered an injury that left them unable to feel pleasure or pain - some form of nerve damage, perhaps - I wouldn't think any less of that human's mental function. I wouldn't think they had lost any humanity, or argue they should be accorded fewer human rights. The fact that they'd still have hopes and fears would matter much more to me than their ability to feel pain or pleasure.

Given this, how important is the ability to feel pleasure, or pain, really?

I think you are focusing on the wrong thing, and disagree with this way of assessing humanity.


I used pleasure and pain as examples of subjective experiences. Hopes and fears are other examples.

If you believe that humanity amounts to more than merely electrical signals and other physical phenomena (ie you're not a reductionist) then I'm surprised it's my side of the argument you're disagreeing with.


I disagree, but I do find it a very fascinating subject area. I would highly recommend reading Valentino Braitenberg's "Vehicles" (http://www.amazon.com/Vehicles-Experiments-Psychology-Valent...). It's incredibly short, thought-provoking, and somewhat humorous to boot.

He goes over building very simple machines that appear to have emotions, hopes, plans, etc., and covers the exact topic you're talking about.
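Not from the book itself, but in its spirit, here's a rough Python sketch of the kind of machine he describes - his "aggressive" vehicle 2b, with two light sensors cross-wired to two motors. The sensor geometry, gains and falloff below are arbitrary choices of mine, not Braitenberg's; the point is just that a few lines of wiring already produce behaviour an observer is tempted to call "aggression":

    import math

    def sense(x, y, angle, light):
        # Sensor reading falls off with squared distance from the light.
        sx, sy = x + math.cos(angle), y + math.sin(angle)
        return 1.0 / (1e-6 + (sx - light[0])**2 + (sy - light[1])**2)

    def step(x, y, heading, light, dt=0.1):
        # Vehicle 2b: each sensor drives the motor on the OPPOSITE side,
        # so the vehicle turns toward the light and speeds up as it
        # closes in -- it looks as if it "attacks" the source.
        left_sensor  = sense(x, y, heading + 0.3, light)
        right_sensor = sense(x, y, heading - 0.3, light)
        left_motor, right_motor = right_sensor, left_sensor    # crossed wiring
        speed = min(left_motor + right_motor, 2.0)              # cap for stability
        heading += 4.0 * (right_motor - left_motor) * dt        # steering gain is arbitrary
        return (x + speed * math.cos(heading) * dt,
                y + speed * math.sin(heading) * dt,
                heading)

    x, y, h = 5.0, 3.0, math.pi       # start away from the light, facing roughly toward it
    closest = math.hypot(x, y)
    for _ in range(2000):
        x, y, h = step(x, y, h, light=(0.0, 0.0))
        closest = min(closest, math.hypot(x, y))
    print(closest)   # far smaller than the starting distance of ~5.8: it homes in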


It is hard to have this conversation on Hacker News.

The format works well for general discussion, but this is a particularly subtle topic - it's hard to make conversational progress without several essays back and forth to pin down a set of agreed terminology.

This is something I'm struggling with here. I don't want to get bogged down in a debate over semantics, but without clarification it's hard to have any meaningful discussion. Sometimes you need to spend some time discussing semantics in order to move past them.

I need a word to describe the sort of higher-level awareness that we humans have in and of ourselves, the thing that makes it not OK to 'switch off' another human. Normally I'd say sapience, after 'wisdom' - but you are clearly using that with a different meaning, so I'll go with 'personhood', without limiting that necessarily to Homo sapiens.

While we still know relatively little about personhood, I would be happier to use hopes and fears as defining criteria, rather than pleasure and pain.

Pleasure and pain are very low-level sensory phenomena. Very simple life can feel pleasure and pain - pain particularly; to my mind, they are neither necessary nor sufficient criteria for personhood.

Hopes and fears are, I think, the product of a much higher-level reasoning process. For a start, they are not merely short-term and reactive; they presuppose some model of the future. Further, I think they generally necessitate a model of self, and an awareness of self in that future.

That makes them very different criteria, in general.

If you were arguing that 'subjective experiences' were necessary for personhood, it'd have been better to state that explicitly.

The post you were responding to, when you mentioned pleasure and pain, said that 'humanity was fundamentally a function of our minds'. I agree with that post.

If you require pleasure and pain - which clearly require some level of situatedness in a sensory apparatus which can communicate these sensations - for your definition of personhood, then I can see how you would argue against a machine 'simulating' personhood.

However, with hopes and fears, no such situatedness is required. I see no reason why a machine couldn't potentially simulate/execute a program/consciousness that had hopes and fears, even without a body, or conventional sensory input.

I see no reason why a simulated/executed (it is very important to realise that these two verbs have the same effective result in this context) program couldn't have subjective experience.

>If you believe that humanity amounts to more than merely electrical signals and other physical phenomena (ie you're not a reductionist) then I'm surprised it's my side of the argument you're disagreeing with.

So many difficult (loaded?) words there: 'amounts', 'merely', 'reductionist', 'side'.

I believe humanity amounts to more than merely electrical signals, in the same way I believe the Mona Lisa 'amounts' to more than 'merely' a collection of oils on a poplar panel.

Join the technological revolution: it's not the atoms that are important - it's the bits. The pattern things are in is much more important than the things themselves. A quicksort algorithm running on vacuum tubes, or transistors, or water pipes, or a CPU simulated in Minecraft, or rocks in an xkcd desert, is still a quicksort algorithm.
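To make that concrete (the particular code is mine, not from the comic): the function below is the same quicksort whether the interpreter runs on transistors, vacuum tubes, water pipes, or a CPU simulated in Minecraft - nothing in its description refers to the substrate.

    def quicksort(xs):
        # The algorithm is a pattern over values; the physical stuff that
        # carries out the steps never appears in its description.
        if len(xs) <= 1:
            return xs
        pivot, rest = xs[0], xs[1:]
        return (quicksort([x for x in rest if x < pivot])
                + [pivot]
                + quicksort([x for x in rest if x >= pivot]))

    print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))   # [1, 1, 2, 3, 4, 5, 6, 9]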

It's not 'merely' a collection of electrical signals. It's become more than that, because the electrical signals are arranged in a certain pattern. But it's still fully explainable in terms of electrical signals.

>(ie you're not a reductionist)

I do believe our minds are defined and explained (in theory) by physical phenomena, like everything else in the universe.

I'm not a supernaturalist. How about you?


I'll try to explain my point of view on this to the extent that I myself understand it.

First, to me, what you call "personhood" - which you define as that which makes it wrong to kill (or, I would assume, to cause suffering to) another human being - is based on the capacity of the being in question to experience suffering. I think that humans have a much higher capacity for suffering than most (if not all) other terrestrial animals, due to their more complex nervous systems.

Part of this increased capacity may be due to increased cognitive capacity per se (for example, I will suffer if I think my life is in danger, whereas an animal may not be able to conceptualize that danger and hence will not experience suffering in the same situation). Humans clearly have greater sapience than animals, and this can lead to greater suffering.

It is also possible that humans have a greater intrinsic capacity for subjective experience independently of any cognitive process. If this is true, I would say that humans have greater sentience than animals. I don't know if it's true, because I lack a conceptual model for understanding the nature of sentience and what physical processes may give rise to it.

In the absence of such a model, I can only guess and my guess is that sentience somehow arises from the interaction between information and matter.

I think it's likely that deterministic computers will achieve human level cognitive abilities in the relatively near future. I know of no cognitive function that humans perform which I can't imagine being executed on a sufficiently powerful computer. I would consider such machines to be sapient but as far as I can see they would have no sentience whatsoever (not even as much as lower animals).

As for your last question:

It seems to me that human beings are constantly falling into the same intellectual trap. That trap consists of believing that their scientific theories are not only accurate but also complete. At the end of the 19th century physicists believed they knew essentially everything there was to know about physics (except for a couple of pesky anomalies that could easily be swept under the rug). Ten years ago it was common wisdom that the majority of human DNA was junk which served no purpose. Only now are we beginning to understand the role that "junk" plays in regulating gene expression.

I think that current scientific theories are not capable of explaining the nature of sentience or of predicting which physical systems will be sentient and which not. I'm not sure if that makes me a supernaturalist or not.


How would you tell the difference between a human-level AI and a human? Sentience is not a well-defined term. The best you can do is to ask the intelligence in question whether it has subjective experience; if it says yes, you have to believe it. I couldn't prove to you that I have subjective experiences. I think it's irrational to say that every cognitive ability a human has can be simulated perfectly well on a computer, but that only a biological brain is capable of subjective experience.


Thanks for the detailed response.

>It is also possible that humans have a greater intrinsic capacity for subjective experience independently of any cognitive process.

I must disagree with you here - I would say that subjective experience is a cognitive process.

Of course, this is a speculative opinion; but it seems likely to me:

We know that we can implement simple algorithmic and statistical processes on neural nets; we know we can build very complex processes from simple building blocks; we know the brain is full of neural nets; we know that the high-level cognitive processes are built on the neural nets; and we know that if we interfere with the brain, we disturb the personhood.

To me, this seems like a pretty good case for applying Occam's razor and deciding that personhood comes from the (big, incredibly complex) neural nets.
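As a toy illustration of 'complex from simple building blocks' (not a claim about how real neurons work): a single threshold unit - a weighted sum plus a cutoff, about the crudest caricature of a neuron there is - can act as a NAND gate, and composing a handful of them already computes a function (XOR) that no single unit can.

    def unit(weights, bias):
        # The crudest caricature of a neuron: weighted sum, then a threshold.
        return lambda *inputs: int(sum(w * x for w, x in zip(weights, inputs)) + bias > 0)

    nand = unit((-1, -1), 1.5)        # fires unless both inputs fire

    def xor(a, b):
        # XOR is beyond any single threshold unit, but four of them manage it.
        c = nand(a, b)
        return nand(nand(a, c), nand(c, b))

    print([xor(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])   # [0, 1, 1, 0]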

Loosely, I believe that if we could write a sufficiently powerful piece of software, and if we constructed it so that it was capable of complex introspection, then it would say it had a subjective experience. And I would tend to believe it. Such a feat is clearly some distance from the current state of the art - we have no idea how to do it yet - but still.

>I think that current scientific theories are not capable of explaining the nature of sentience or of predicting which physical systems will be sentient and which not. I'm not sure if that makes me a supernaturalist or not.

This is tricky territory.

It's reasonable to believe our scientific theories are incomplete. Certainly, we have a long way to go, scientifically, before we understand how minds operate.

But let's look at our physics knowledge. Our physics models seem to be pretty good. Unless we are at very high energies, they seem to do a very accurate job of describing the world. We've built components with that knowledge that have been reliable; our abstractions do not seem very leaky.

It is important to realise that, while relativity did come along and wipe away aspects of the Newtonian worldview, Newtonian physics is still accurate enough for deciding where to throw the ball to get it into the net. The more detailed physics is only relevant in certain situations that demand the lower level of abstraction.

I don't believe nature evolved brains that require subtle properties of high energy physics to work. I'd say the systems underlying them aren't even quantum in nature. Some scientists would dispute this - notably Roger Penrose.

But is evolution really going to build brains, and processing software - which are present, in some form, in a vast range of species - out of some mysterious quantum - or lower level - physics?

Evolution tends to be about bootstrapping. Higher-level systems are always assembled out of lower-level building blocks. That's why I have cells, tissues, organs - motifs and structures that repeat themselves.

If you evolve a system that is sensitive to the smallest changes in the properties of its smallest units (quantum interactions, say), it's much harder, in terms of the search required. It is also going to be much more fragile.

Even neurons are a pretty high-level construct, from a physics point of view; high enough that our current physics probably models their functional characteristics completely. (Although we have yet to fully apply our current physics to analyse them completely; ANNs as we see them in AI are but the crudest approximations, and shouldn't be considered descriptive.)

So I would tend to believe that we don't need any new physics to understand how minds work.

If you are saying that there is some strange new physics needed to describe how the information relates to matter, then I think you are, functionally, a supernaturalist. I also put Penrose in that box, though, so it's a personal opinion, and the jury is still out.

If you think that there is a lot of science yet to be discovered, to come up with theories of how computational building blocks are made from neural architecture, and what the 'algorithms' that run on that architecture are, then I agree with you.


I am glad that we can have this discussion, despite its difficulty. It is an intensely interesting subject, and I can't resist.

For the poster below, thank you for bringing up the p-zombie argument. Indeed, with links to deconstructionism and physicalism, how can you even tell that another human is "human"? Can you prove that that individual is conscious, has subjective experience, and has a soul? Or is it, in fact, an assumption that all humans possess these things? This fundamental assumption is put to the test if a computer can perfectly simulate a human mind. Perhaps we will see that such notions are purely subjective constructs, like "cold". Though we may feel "cold", it is a sensation and an experience. That experience is not provable, even if we might get everyone to agree that -50C is "cold." All we can prove is the behavior of the brain, and assume that that behavior is linked to experience in general.


You are basically arguing about P-Zombies[1]. I think that line of argument is fallacious.

What if the computer were powerful enough to perfectly simulate the workings of a human brain? Does that brain not have a consciousness?

[1] http://en.wikipedia.org/wiki/P-Zombie


Well, first, I don't think a computer (at least a deterministic one) will ever be powerful enough to "perfectly simulate the workings of the human brain". That would be a quantum N-body simulation with N > 10^23. There probably isn't enough matter/energy in the universe to do that.
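(A rough back-of-envelope, assuming for simplicity that each of those ~10^23 particles contributed just one two-level degree of freedom - the real state space would be larger still:)

    import math

    N = 10**23                     # very rough particle count
    # An exact quantum state of N two-level systems needs ~2^N complex
    # amplitudes; count the decimal digits of that number.
    digits = N * math.log10(2)
    print(f"2^N has roughly {digits:.1e} decimal digits")
    # ~3e22 digits: the number of amplitudes dwarfs the ~10^80 atoms in
    # the observable universe, so storing the state exactly is hopeless.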

When people talk of simulating the human brain, they mean something along the lines of: if we assume that the function of the human brain is entirely determined by the transmission of electrophysiological signals between neurons, then we will be able to simulate that behavior in the not-too-distant future.

Personally I think we'll have general AIs, machines which can pass the Turing test, well before such simulations have been developed.

In either case I can't see how a CPU could become sentient just because of the software it's running. Think about it. Do you worry that your CPU might be suffering when you drive it hard? If the simple fact of running the right software can allow a piece of silicon to feel pain, then that's something that my model of physical reality can't account for.


Do you worry that you've died and a new, unrelated consciousness has been born every time a cosmic ray pops a single electron in your brain up to a higher orbital? It seems obvious, from the amount of mechanical or chemical intervention that's required to create a detectable change in consciousness, that the human substrate for consciousness only matters at a far more macroscopic level than the quantum.

Also, I think that the amount of worry you experience over whether a computational substrate running software designed without consciousness in mind can suffer is a very, very, very poor proxy for the question of whether machine sentience is possible. Think about it. If you did have a magical hypercomputer, and a program running on it which ran a perfect quantum simulation of a human baby in unbearable pain, would you remain completely blasé?


A superconductor doesn't stop being a superconductor if a cosmic ray ejects one of its electrons. It would however be erroneous to conclude from that fact that quantum mechanics plays no role in superconductivity.


I don't think your true objection is based on quantum mechanics. Based especially on one comment[1], I think you're making a hidden assertion that subjective experiences carry moral weight only if they cannot be explained in terms of other understood phenomena.

I'd like to examine this assertion: If someone could prove to your satisfaction that your present conscious substrate is something other than a human brain--say, a computer program with inputs provided by an advanced android body in the real world--would you consent to undergo the proverbial 50 years of torture in exchange for a dollar donated to your favorite charity?

[1] http://news.ycombinator.com/item?id=2215074


Well obviously everything is based on quantum effects. When a transistor changes its state, quantum effects need to happen. But I don't think it's legitimate to postulate that the whole complexity of the quantum state is necessary to simulate the behaviour of a transistor.


I also think that general AIs will not be the kind that simulates brains.

But I think it's wrong to tie consciousness to the hardware it's running on. Consciousness is a property of the software. I think it's fallacious to postulate a difference between an intelligence that runs on brains and an intelligence that runs on other machines if the difference can't be observed.

And yes I would be worried that a robot might suffer if it is sufficiently advanced to exhibit the same behaviour as, say, a cat.


What are "pleasure" and "pain"? Ultimately, they're just electrical signals triggering the release of certain chemicals in the brain, with subsequent alterations to how the brain processes other stimuli.

"It doesn't really feel pain, it just looks like it does" is probably how many people justified atrocities against other humans they viewed as somehow "lesser".


Maybe they're just electrical signals in the brain but that's not how we experience them.

A CPU cannot experience the electrical signals that flow through it. Unless you believe rocks are sentient.

I could just as easily turn your argument around and argue that torturing you is perfectly OK because all I'd be doing is creating electrical signals in your brain.


>I could just as easily turn your argument around and argue that torturing you is perfectly OK because all I'd be doing is creating electrical signals in your brain.

That's actually (almost) the point of my comment - if, in two logical steps, you can get from your argument to "torture is alright", perhaps you should reconsider your argument.

Perhaps you should also consider why (a large part of) humanity considers torture to be a Bad Thing?


But it's your argument that I transformed, not mine.

Humanity considers torture to be a bad thing because it believes in the reality of subjective experience, which you appear to be denying.


My 'argument', as you call it, was a straightforward extension of your comment about computers only being capable of "acting like" they feel pain.

How would you show that a silicon life-form lacks subjective experience?


You said:

>What are "pleasure" and "pain"? Ultimately, they're just electrical signals triggering the release of certain chemicals in the brain, with subsequent alterations to how the brain processes other stimuli.

My argument, then, is that if all pain is, to you, a set of electrical signals, then why should one avoid causing it? We don't usually have moral qualms about generating electrical signals.

I can't show that a silicon life-form lacks subjective experience any more than I can prove that a human possesses it. What I know is that I have subjective experiences and that there are certain things, like pain and death, that I really want to avoid. Since other human beings are very similar to me and also exhibit behaviors in the presence of painful stimuli similar to my own, I assume that they also possess such experiences. I refrain from causing them unpleasant experiences because doing so would tend to create a world in which such experiences are more likely in general, and thus make it more likely that I would in turn undergo such experiences.

Consider this thought experiment.

Suppose that in the relatively near future we create a machine of equal or greater intelligence than humans. Suppose also, for the sake of argument, that, for whatever reason, these machines do not have subjective experiences, but behave as though they do in order to facilitate human interaction. Would such a machine consider killing a human to be wrong? Why should it? The human claims that it fears death and wishes to avoid it. The machine also claims to fear death but in reality feels no such fear. If the machine has been told by humans that there is no difference between it and them, because both can be reduced to sets of electrical signals, then it could only assume that a human, despite its behavior, experiences no actual fear of death, and therefore that killing a human is not wrong. So it seems to me that there is a real danger in assuming that any sufficiently intelligent machine is sentient.


Is there a meaningful difference between a machine that claims it fears death but in reality doesn't (for some definition of "fears"), and something that actually 'fears' death?

How can you tell whether something 'fears' death or not?


But that is how we experience them - though perhaps not with electrical signals, but chemical ones instead, e.g. dopamine. We're programmed to enjoy dopamine.


Humans experienced pleasure long before they knew anything about dopamine.

You seem to be rejecting the reality of subjective experience by equating it to its objectively observable physical correlates.

Moral judgements are not (at least historically have not been) based on such observables but rather on the assumption that that which causes oneself pleasure or pain typically does so for others as well.

You can put someone in an fMRI and find that certain brain signals correlate with that person claiming to feel pain (i.e. correlate brain state with behavior), but you can never establish a causal relationship between those signals and the subjective experience of pain, because subjective experience is by definition not objectively observable.

These sorts of philosophical arguments have been going on for a long time with little practical effect. However, there is a danger that, if machines do become sapient and surpass us in intelligence, then based on such arguments they will conclude that human beings are merely an inferior form of intelligence which is not worth preserving. If humans themselves deny the existence of subjective experience, then why should machines believe in such a thing?


Computers also calculated pi long before we used them to design circuits.

"However there is a danger that if machines do become sapient and surpass us in intelligence that based on such arguments they will conclude that human beings are merely an inferior form of intelligence which is not worth preserving"

You mean like we act towards animals today?


If I hit a rock and it cried out in pain, I would believe it to be sentient.

Sentience does not come from the raw materials -- we're just a bunch of carbon, hydrogen, nitrogen etc. ourselves -- but from the way those materials are arranged to process information.


You can buy a doll at the toy store that does that. Do you believe they are sentient?


There is not much of an objective difference between the pain reactions of, say, insects and the robots we have today. So I would say yes, those robots experience a reaction similar to that of insects. But then again, I don't think insects have a sufficiently advanced nervous system to suffer, so it's okay to kill the critters.


AFAIK, creating electrical signals in somebody's brain can be considered torture.

I read about a new device that can simulate extreme pain by creating electrical signals in your nervous system - that's torture.

If you harm an ant, is that torture? Or is the ant's brain too simple to make this count?



