
>Fooling people into mistaking a submarine for a whale doesn't show that submarines really swim

Why not? Presumably part of the act of fooling would involve 'swimming' submarines.

I think passing a __strong__ Turing test actually does say something about our brains, cognition, and consciousness. And by a 'strong Turing test', I don't mean dinky 5 minute tests with a "Ukrainian boy". Imagine you carry on a 20 year relationship with a computer pen-pal, having in-depth discussions about everyday things, from movies to music to sports to family and relationships. Imagine such a computer program fooling every human it interacts with for decades at a time. I think that would say something about ourselves, and I think it would render the question of consciousness meaningless. If it quacks like a duck, looks like a duck, walks like a duck, and you can't tell the difference between it and a duck, it's a duck.

Another problem is that philosophers and even regular people tend to revel in ambiguity when it comes to certain ideas and concepts, even trying to elevate them to supernatural levels. Things like free will, love, or consciousness are apparently outside of the natural world, and not subject to natural laws. I think that's wrong. I think the answer is much simpler and much more humbling than we are willing to admit.




My big takeaway from the Chomsky quote is that it's useless to ask if computers can think, because one of the fundamental properties we associate with thinking is some biological/spiritual aspect. For a computer to "think", we'd need to redefine what we mean by "think". And if we have to redefine what it means to think, why bother hailing it as an accomplishment to make a computer "think"? These properties are incredibly important to our understanding of the world. The fact that we have different words for "think", "compute", and "calculate" gives some insight into the value we assign to the distinction as a culture. If those differentiations didn't exist, the problem of simulating cognition wouldn't change. What would change is our perception of the problem. If thinking, calculation, and computation all shared the same word, it would become a question of degrees of "thinking" rather than of substantially different processes.

I don't think Chomsky is trying to elevate certain things outside of the laws of nature. He's describing how what we choose to differentiate changes our fundamental perceptions of those things. Submarines could "swim". They could also "read", but those words have a very specific set of properties associated with them.


> we associate with thinking is some biological/spiritual aspect

Well I don't.

> The fact that we have different words for "think", "compute", and "calculate"

Also, my native language (Finnish) doesn't have separate words for "compute" and "calculate". Both are covered by "laskea". (Also "suorittaa" is used, but that is "execute".)


So the words for a calculator (small electronic device used to do mathematics, usually arithmetic) and the word for computer (like a general purpose desktop computer, the thing that runs an OS like Apple OS X or Microsoft Windows 8) are the same? I'm assuming that you do have a different word for "think", though, as you didn't mention that.

What about arvioida? Would that be used for compute?

This might be an instance where the Sapir-Whorf hypothesis (linguistic relativity, http://en.wikipedia.org/wiki/Sapir_Whorf) comes into play. The idea would be that you don't consider computing and calculating to differ from thought because the [primary] language you use [or grew up using] doesn't make that differentiation.


> So the words for a calculator ... and the word for computer ... are the same?

No no. We have laskin ("calculator") for calculator and tietokone ("data machine") for computer.

If you don't know any language other than English, you'd be surprised at how many different words different languages have come up with for modern things like the computer. The etymology does not necessarily resemble that of the English word.

> arvioida

That is "to estimate".


Thanks for your informative response.

Actually, I think that just knowing English well, one can see the same effect as knowing a few other languages: the etymologies of words being dispersed amongst Latin, Greek, Scandinavian, Germanic and French origins (as a first pass; of course there's influence from many, many languages) makes it easy to see how many words can develop for the same thing, each with a subtle twist of meaning. Like the use in English of beef/cow for the cooked meat vs. the animal.

The only words I know for a computer are computer (English, derived from a name for a person who calculates values), ordinateur (French, origin is Latin to do with organising/ordering; close to English "ordinator"), cyfrifadur (sp? Welsh, origin is account-er; similar derivation as English), and Rechner (German, a cognate with English "reckoner") ... but in these cases I think everyone normally uses just "computer" or a transliteration of it, like in Kiswahili ("kompyuta", don't quote me on that spelling!).


Well in addition to Finnish "tietokone" (data machine, or information machine), there is at least Swedish (also Norwegian) "datamaskin" (data machine), nowadays usually shortened to "datorn", and Turkish "bilgisayar" (information counter).

So some form of "computer" seems to be the choice in a large number of languages, though.


But you do have a different word for "think", right?

Google Translate shows multiple, but these three seem most applicable:

  ajatella: propose, consider, think about, weigh, cogitate, think
  miettiä: think about, think, consider, reflect, contemplate, ponder
  luulla: think, believe, suppose, imagine, expect, suspect
The difference between "think" and "compute" is the important distinction being made, and I tend to agree that moving the goalposts does nothing to help decide whether a computer is really capable of thought.


> But you do have a different word for "think", right?

Yes, sure.


There are a few problems with this line of reasoning.

First, there is an implicit assumption about how cognition and consciousness are defined, but there is no real definition.

To follow the example: it's not defined what a duck is, and "philosophers" don't actually try to elevate the question; they try to find answers to it.

While a "strong" test for example would look appealing, the problem is that there is no real model of weakness/strenght, and while a human would pass all the possible models of humanity, a computer would surely pass only the ones it's programmed for.

Another thing is that in order for a computer to truly mimic a human, that is, to fool people for 20 years about movies/music/sports/family/relationships, but also other abstract experiences, the computer would need to experience them, and especially, elaborate them in a human way.

Which is again, very "open", and it's the core problem.

Even excluding the openness problem, the Turing test, in the way it's posed, looks to me like the photorealism problem. You can achieve it in a static picture, but once you freely move in a 3D world, you see the flaws that show that what you're experiencing is in reality a limited set of limited algorithms, which are used to work around hard problems (workaround is the key word).

I think exactly the same arguments stand, that is, in order to mimic a human in such a high level of faithfulness, very hard problems would need to be solved, not just worked around.


The very hard problems you are referring to are called Strong AI. If your argument is that passing a proper Turing Test requires Strong AI, then, well, duh?


>First, that there is an implicit assumption about how cognition and consciousness are defined, but there is no real definition.

And yet volumes are written on the topic, and you have no trouble dismissing this particular approach as not really answering "consciousness". Well, what do you mean exactly, then?

>and while a human would pass all the possible models of humanity, a computer would surely pass only the ones it's programmed for.

I don't agree with either the first or the second assumption a priori. Would a schizophrenic, autistic, or infant brain pass "all the possible models of humanity"? And second, why would a computer pass only the models it was programmed for?! We know that isn't true with today's software and today's AI. IBM's Watson wasn't programmed with every single fact it used to win Jeopardy, nor is any rudimentary video game AI programmed with a specific set of behaviors that it executes regardless of player actions. If you want to go deeper, I can also claim that the human brain itself was shaped by natural selection for a finite and very specific set of tasks, and that the neural machinery is as deterministic as software, since both are subject to the same fundamental laws. So tell me again why software can't simulate human behaviour?

>Another thing is that in order for a computer to truly mimic a human, that is, to fool people for 20 years about movies/music/sports/family/relationships, but also other abstract experiences, the computer would need to experience them

Not necessarily. It can lie. Or it can live them by ingesting huge amounts of information from digital content, or maybe it was trained in a lab or adopted by a family and raised like a human. Whatever.

> and especially, elaborate them in a human way.

Define "human way", because that's the entire point. I don't see intrinsically why such a computer program could not be built. What's so special about the "human way".

>You can achieve it in a static picture, but once you freely move in a 3D world, you see the flaws that show that what you're experiencing in reality is a limited set of limited algorithms, which are used to workaround hard problems

And you base this on what? I mean you're just asserting it can't be done, why? Because human brains are powered by magic!?

>I think exactly the same arguments stand, that is, in order to mimic a human in such a high level of faithfulness, very hard problems would need to be solved, not just worked around.

I'm not sure what the difference is between solving problems and merely working around them, in your context. But yes, we're not there yet, obviously. I think the Turing test is actually deeper than most people give it credit for. And you illustrate this perfectly: all your objections are hand-wavy appeals to some vague notion of the "human way".


In any case, neither of you have fooled me yet.


> Imagine such a computer program fooling every human it interacts with for decades at a time, I think that would say something about ourselves and I think it would render the question of consciousness meaningless.

Maybe we're operating with different notions of "the question of consciousness", but I have to disagree that a perfect simulation of a human mind would dissolve the problem. And by "the problem" I mean the Hard Problem of Consciousness, the issue of how the hell the subjective experience of being arises in a lump of bloody meat (or in anything, really).

I suppose my question to you is this: do you believe there is something it is like to BE that simulation, to have a first-person subjective experience from the perspective of the computer?

I agree with you that elevating certain aspects of reality to "supernatural" status is unhelpful; it's the equivalent of saying "sorry scientists, but this stuff is out of bounds." (I'm also not convinced that dividing reality into natural and supernatural is even a coherent distinction, but that's another discussion.) However, I have to take issue with your suggestion that the answer to consciousness is "much simpler... than we are willing to admit." It may turn out that a future science will come up with a very satisfying answer to the Hard Problem of Consciousness, but I suspect that this will require a paradigm shift so radical that it will make current neuroscience look like phlogiston theory.


>Maybe we're operating with different notions of "the question of consciousness", but I have to disagree that a perfect simulation of a human mind would dissolve the problem.

Because human brains are magic?

> And by "the problem" I mean the Hard Problem of Consciousness, the issue of how the hell the subjective experience of being arises in a lump of bloody meat (or in anything, really).

You have your answer: the effect of consciousness clearly arises from physical matter that is subject to the natural laws of physics and shaped blindly by natural selection. That should tell you that it is quite possible. For example, for the related concept of free will, I think we can confidently say that for all intents and purposes we don't have any, and yet our brains produce a very powerful subjective feeling of having it. This is why I think consciousness is a much simpler problem than it appears. We have a cognitive bias for seeing the world in a very specific way, just as we have a cognitive bias toward visualizing 3-dimensional spaces and incredible problems visualizing higher dimensions (heck, we can't even visualize a 2-D or 1-D space without embedding it in a 3-D space).

Consciousness and free will are very powerful illusions hard-wired into our brains, unless you think the particles that make us up somehow don't follow deterministic (or random, in the case of QM) natural laws?


> Because human brains are magic?

See, this is the problem with this virulent strain of Scientism making the rounds right now. There's this tendency to divide ideas into two camps: "explained by our current understanding of physics" and "magic". It seems like a knee-jerk reaction to religion, as if anyone who posits that we don't have a complete understanding of reality is therefore a crypto-theist. It completely misunderstands the nature of scientific progress: the evolution of our conceptions of reality in response to empirical evidence.

It's easy to picture human progress as millennia of trudging through continually-decreasing ignorance and finally arriving at the Correct Answers, but this can be a dangerous view. Science works best when we aren't constrained by dogma -- when we're allowed to "think outside the box" and consider the world in new and revolutionary ways.

People in the thrall of this Scientism often act like raising the Hard Problem of Consciousness is some kind of intellectual weakness, that we're just not able to "get over" the fact that subjective experience happens to emerge from physical matter. Nobody here (or at least not me) is saying that the Hard Problem means that subjective experience is "magic" or that we have anthropomorphic souls that float up to heaven with angel wings when we die. The Hard Problem of Consciousness is still a scientific problem, and I'm not saying it can't be solved by science. But telling me that Strong AI will basically solve this issue (and that any protestation is an appeal to "magic") comes across as hand-waving; the simulation of a conscious entity does not help me understand why being in the world feels like anything.


Do you believe a perfect simulation of a human mind would result in something with behavior identical to that of the person whose mind we're simulating? Yes or no.

If yes, do you disagree that both are conscious? If you do, then you're contradicting the assumption in the question (that identical behavior implies identical properties). If no, then, again, you're contradicting the assumption in the question. You think consciousness is not determined by behavior. Ok, then what determines whether or not something is conscious?


> Do you believe a perfect simulation of a human mind would be able to create something with an identical behavior to that of the person whose mind we're simulating? yes or no.

Sure, I'll admit the possibility exists.

> If yes, do you disagree that both are conscious? If you do, then you're contradicting the assumption given in the question (that identical behavior implies identical properties).

I must've missed this... who was saying that identical behavior implies identical properties? Sure, I'd probably disagree that the A.I. is conscious in a subjective sense, and I'd also probably disagree that identical behavior implies identical properties. People can imitate each other without taking on the properties of the imitated.


[deleted]


> What determines whether something is conscious?

I don't really have an empirical "test" for subjective consciousness beyond my own immediate, first-person experience of it. This may sound like a concession or even a defeat, but I think I'm allowed to posit that phenomena exist which we currently lack the empirical tools to investigate. "Currently" is the key word; as I said before, it is arrogant to assume consciousness will forever remain a mystery to scientific inquiry, just as it is arrogant to assume it must be a simple extension of existing theories.

I admit I have nothing beyond my own experience to validate the idea of subjective perception, and I have no evidence beyond intuition as to whether or not a machine can "experience" input the same way a brain can. However, I think I'm still entitled to believe that subjective experience is a real phenomenon whose nature can and should be explained, and that our scientific understanding is presently inadequate for this task.

EDIT: I can understand the fear of relying on intuition. After all, it's the same thing that led us to believe that lightning came from the gods. But that doesn't mean that we should throw out the entire experience of perceiving lightning. Clearly lightning is a phenomenon we experience, but we still don't understand how photons entering our eyes produce the subjective experience of blinding whiteness, or how vibrations from thunder, translated into electrical signals by the ear, result in the subjective experience of the sound itself. The information is in the brain, but we still don't know how information becomes experience. This doesn't mean we have to explain it via gods, but it does mean we still have something left to explain.


>See, this is the problem with this virulent strain of Scientism making the rounds right now.

We have a very good understanding of the fundamental forces and particles that govern the brain and our everyday experience. That doesn't mean that we'll use the vocabulary and mathematics of fundamental physics to explain brain processes, just as we don't use particle-physics vocabulary when we model hurricanes or explain cell processes. Nevertheless, whatever model or explanation you come up with for Consciousness had better square with that fundamental physics; otherwise you're going to be in crackpot territory. That's not Scientism, that's just a fact.

>There's this tendency to divide ideas into two camps: "explained by our current understanding of physics" and "magic".

Again, Quantum Mechanics and the Standard Model (as well as the laws of Chemistry that abstract them) are not going away. Evolution and Natural Selection are not going away either. That constrains the kinds of explanations we will have for Consciousness. If you think understanding Consciousness will overturn either the Standard Model or Evolution, you're going to be very disappointed. Again, that's not Scientism, that's just a smart prediction.

>But telling me that Strong AI will basically solve this issue (and that any protestation is an appeal to "magic") comes across as hand-waving

I didn't say it will solve Consciousness. I think Consciousness is an ill-defined concept, but one that many people nevertheless have very strong feelings about. I speculated that we'd probably come to see it as such when (if) we are capable of building such a strong AI, probably even before that.


You should try harder not to assume the people you're talking to are morons. The person you're debating doesn't think evolution is going away, not even a little bit, not even for a second. That you don't recognize this means you should be more charitable in understanding his point of view.


Eh, I'm willing to cut them some slack. I talked about science as a continuous process of revision, which might give the impression that I'm making a Pessimistic Induction argument (http://en.wikipedia.org/wiki/Pessimistic_induction) against all scientific conclusions. I think macspoofing was just trying to counter with examples of scientific theories that seem pretty airtight, and that's fair. Also I basically accused them of falling prey to mindless Scientism, which was admittedly a bit harsh (seeing people attribute "magical" explanations to skeptics can trigger my rage mode, apparently).

Anyway, this whole thing is very controversial (they don't call the Hard Problem "hard" for no reason), and I can see the appeal of trying to safeguard the scientific process of knowledge-building from the messy weirdness of subjective experience. Time will tell if consciousness can be explained by a more advanced physics. I'm certainly looking forward to it.


>That you don't recognize this means you should be more charitable in understanding his point of view.

Should that understanding have come before or after being accused of 'Scientism'? That feels like an insult, but I can't be sure, because I don't really know what it means in context.

>The person you're debating doesn't think evolution is going away, not even a little bit, not even for a second.

I didn't imply that he did. The point I was trying to get across is that there are some real constraints on the type of explanations we'll have with respect to Consciousness. We are not going to need unknown exotic physics to explain it, and whatever the answer is, it will stay comfortably within the current Evolutionary framework. Obviously that could be wrong, but I wouldn't bet on it. This should not be a controversial statement.


> to have a first-person subjective experience

You actually can't prove _anybody else in the world_ besides you has a first person subjective experience. Trying to prove a magic talking box has experience when you can't prove another moist robot has experience is moving the goalpost too far out.

> require a paradigm shift so radical

Not really, we just need more high speed training data. It'll all be available to way too many people, governments, and companies within the next few years.


> You actually can't prove _anybody else in the world_ besides you has a first person subjective experience.

That's absolutely true. My wife and I sometimes joke that each of us is an incorporeal figment in the other's dream. It's impossible for me to know with certainty that I am not the only real consciousness in existence. It's an interesting line of thought, but ultimately unproductive; what could I do if it's true? If it's not, then the validity of all of the other consciousnesses is just as pressing as mine. Either way it behooves me to behave as though it is true, so I suppose that's the starting point for all of my thoughts on this topic: I and all other humans have a first person subjective experience. I agree with (parent? gp?) that describing the nature of that experience is non-trivial.


> Trying to prove a magic talking box has experience when you can't prove another moist robot has experience is moving the goalpost too far out.

Maybe I'm just confused about what goals we're talking about. If your goal is to understand cognition, then sure; a highly intelligent machine is a great way to do that. However, the comment I replied to was suggesting that strong AI would "render the question of consciousness meaningless"; that's a far stronger claim, and in my opinion, unrealistic. I think you and I are actually in agreement on this one... if anything, I was arguing against A.I. having the goalpost of understanding subjective experience.


strong AI would "render the question of consciousness meaningless"; that's a far stronger claim, and in my opinion, unrealistic.

That's actually a great point many people misunderstand, largely due to nobody pinning down a definition for "Strong AI."

I prefer the term "computational consciousness" instead of Strong AI. It gets the point across better about future AI actually _thinking_ and _experiencing_ instead of having people misconceive AI as just a clever if/else decision tree.

Strong/Hard AI (and human consciousness) is a combination of algorithms and data. People have hard-wired algorithms for processing data from their senses. Some people have better hard-wired algorithms than others. But if you take the smartest person alive today back in time and raise them in an isolated environment (i.e. limit their data intake), they won't be the same person and they won't be able to think the same thoughts.

Summary: AI = mostly data, with algorithms to help organize/cluster/recall things. You can't have recall and intent of agency without a self-directed consciousness controlling the internal state<->external world feedback gradient.


I think we agree. And I think Turing's point in choosing a conversation as the test was to pick something that would indicate the vast range of human experience. If a computer could pass a strong version of the test as you describe, then I would agree that we would have to say, as you do, "if it looks like a duck, ... then it is a duck."

But I don't agree that it renders the concept of consciousness meaningless (or at least any more meaningless, depending on what you think about the concept now). On the contrary, I think we might have to say, "this computer is probably conscious", and afford it all of the rights that we afford humans.

BTW I don't think either Turing or Chomsky would say that

> Things like free will, love, or consciousness are apparently outside of the natural world

I obviously don't speak for either of them, but I'm pretty sure they both subscribe to the theory of causal determinism and computational theory of mind.


I'm not so sure that just because you can fool me into thinking an object is a duck, it is a duck. At least when applied to general AI. Nor do I necessarily agree that just because I can have a conversation with an algorithm, we should give that algorithm all the rights given to humans.

For example, if I say mean things to that algorithm, even one that can fool me into thinking it's another human for decades, is that morally wrong? Even if it's just setting some variable in memory to sad=true? If so, is it morally wrong for me to create a program consisting only of a singular red button that, when pushed, sets the sad flag?
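
Purely as an illustration (and obviously a hypothetical toy, not a claim about how emotion actually works), the whole "program with feelings" could be sketched in a few lines of Python; the point is how little is going on:

  # Toy "program with feelings": one red button, one flag.
  class RedButtonProgram:
      def __init__(self):
          self.sad = False  # the entire "emotional state"

      def push_button(self):
          self.sad = True  # is pressing this morally wrong?

  program = RedButtonProgram()
  program.push_button()
  print(program.sad)  # True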

There was another "game" created a couple years ago that featured a fictional human (a little girl, if memory serves) on life support. They programmed it so that unless somebody in the world clicked a button in their browser, the girl would die within 10 seconds and the game would basically delete itself, ending forever. (The response was so strong that people flooded the server, and I believe their hosting provider blocked access to their site, thereby killing the program untimely.) If the little girl was removed, and the goal was to just keep the program running as long as possible on life support, would that be morally wrong?

I ask these questions because I don't think even a complex script which is doing nothing more than attempting to fool me should be considered worth assigning rights to, even if it's really good at it. To be honest, I'm not entirely sure how to define consciousness in this regard, but I suspect it would require surprising its creators to the point that they cannot fathom how it behaved the way it did. Or maybe I'm confusing consciousness with free will. Either way, if the only thing separating a program that deserves human-level rights from those that don't is some combination of power (enough to scan all the valid responses and deliver the best one) and a wide array of responses linked to conversations, then I'd argue that all programs deserve the same basic human rights.


> is it morally wrong for me to create a program consisting only of a singular red button that, when pushed, sets the sad flag?

We could discover tomorrow that there's a "sad ganglion" in the human brain that can be set on or off with an electromagnetic field. Does that mean that humans are ultimately biological machines without rights?


I think it's extremely likely that humans are going to have a very hard time in the near future coming to terms with the fact that we are what is "in" our brains, and that our brains are basically just biological computers whose inputs, outputs, and the way they convert one into the other are very much a part of the natural universe and can be explained. And once we can explain it, we'll be able to predict it and thereby control it.

I know what you're asking and the point you're trying to make, and I totally agree that it's an interesting problem. Perhaps one method is that non-humans will get rights when they evolve to the point of creating their own rights, and the means with which to stop other things from trampling on the rights they have assigned themselves.


I'm with you. I might have said it a little differently in that I believe that rights are societal conventions and written agreements that we give each other and protect for one another. Without conscious agreement to the system in which those rights exist, other bases need to be established.


Perhaps, but "feelings" are more complicated than that. Pain isn't just a little flag that gets set to TRUE. At the very least, it's rewiring your neurons to avoid that experience, initiating instincts, and affecting your entire brain.


Definitely, but then again any AI worth a damn isn't going to have a "sad flag" either.

My point was that understanding the nature of an emotion in a trivial way should be orthogonal to how we think about what rights that being should have. At some level, we're all machines. Just because one's software runs on silicon vs. gray matter, and just because one's hardware was deliberately built and is understandable in computing terms, doesn't mean that we really understand what it is to be sentient with respect to the rights to be free and to exist.


> To be honest, I'm not entirely sure how to define consciousness in this regard, but I suspect it would require surprising its creators to the point that they cannot fathom how it behaved the way it did.

It absolutely would require that. If you believe in determinism, then part of the definition of intelligence, or an intelligent system, is that it exhibits behaviors whose causal chain is just too complex for a human consciousness to intuitively follow. Incidentally, biological intelligence is built on top of some other systems which, given our current levels of understanding, also meet that criterion themselves, so we're pretty clueless about how it works.

Part of this is a definition problem: the Turing Test was defined vaguely enough that everyone has different conceptions of it. When a person talks about a "good" or "strong" Turing Test, they are envisioning one that would pass all of their personal standards and all the ways they could think of to trick it. And when they talk about it with someone else, who likely envisions a somewhat different version of the test, there seems to be a tendency for that person to assume that their version would not be passed, so they start to talk past each other.

In other words, if an AI were to consistently surprise you with thoughtfulness, compassion, creativity, or whatever other constituents of "true intelligence" you assume the duck imposter would lack, would you then confer it those rights?

I think you would, because that's the point: it has thoroughly convinced you that it is "thinking", and you feel like it truly "understands" you - frankly a higher bar than many rights-granted humans would pass.

What separates animals that deserve human-level rights from those that don't? Would you argue that all animals do? If not, I would say that that distinction is no less arbitrary than the one you're drawing at the end.


> it would require surprising its creators to the point that they cannot fathom how it behaved the way it did

That requirement by itself is probably the easiest of the requirements.


>>If it quacks like a duck, looks like a duck, walks like a duck, and you can't tell the difference between it and a duck, it's a duck.

If, later, you discover evidence contrary to it having been a duck, can you revise your belief? Or was your having been convinced of its duck-ness sufficient to make it a duck?


I agree. The only proof you have that I am a conscious being is that you believe that to be so because I conduct myself as one. You can't really prove that I am or am not conscious. You can only observe that I act like a conscious being.


I would guess that most contemporary philosophers do not fall into these traps, but naturally they get no attention.

Why?

Journalists/interested parties ask philosophers about the mind/body problem, the matter of truth, god... and if 9 out of 10 answer that there is no real problem, just misconceptions, then the journalist will go to the 10th one, who will get the air time.


>If it quacks like a duck, looks like a duck, walks like a duck, and you can't tell the difference between it and a duck, it's a duck. //

Just because you can't tell that something is false, or perhaps just can't elucidate the reason, doesn't make it true.



