
"Unlike transistors, neurons are intrinsically rhythmic to various degrees due to their ion channel complements that govern firing and refractory/recovery times. So external "clocking" is not always needed to make them run."

Transistors don't need a clock in order to run. They can in fact be set up to create their own clocks. The purpose of clocks is for synchronization across the chip so that we mere mortals can modularize the operation of a CPU. That is, clocks exist mostly so that we can think in terms of sequential gate operations (or from the programmer point of view, assembly code).

The author seems to confuse the chosen approach to designing computers (VLSI) with the actual physical capabilities of a transistor. We have opted over the last forty years to develop the CMOS logic gate way of organizing computers. There are other ways, as the brain demonstrates, and it is not clear at all that you can't do it with novel transistor topologies.
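
To make the first point concrete, here is a minimal sketch (a discrete gate-delay model in Python, not a circuit simulator) of the classic way transistors are set up to create their own clock: an odd-length ring of inverters, which oscillates with no external clock at all.

    # Toy model of a 3-inverter ring oscillator. Each tick is one gate delay;
    # nothing external drives it, yet node 0 toggles periodically.
    ring = [0, 1, 0]  # current output of each inverter
    for tick in range(12):
        # each inverter's next output is the inverse of the previous stage's current output
        ring = [1 - ring[(i - 1) % len(ring)] for i in range(len(ring))]
        print(tick, ring[0])  # node 0 behaves like a free-running clock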




Whenever I see comparisons of the brain to a computer along with the implicit suggestion that computers will someday replicate brains, I am reminded of this passage from Leibniz's Monadology, and of the thought that today's computers are but yesterday's mills:

"Moreover, it must be confessed that perception and that which depends upon it are inexplicable on mechanical grounds, that is to say, by means of figures and motions. And supposing there were a machine, so constructed as to think, feel, and have perception, it might be conceived as increased in size, while keeping the same proportions, so that one might go into it as into a mill. That being so, we should, on examining its interior, find only parts which work one upon another, and never anything by which to explain a perception. Thus it is in a simple substance, and not in a compound or in a machine, that perception must be sought for. Further, nothing but this (namely, perceptions and their changes) can be found in a simple substance. It is also in this alone that all the internal activities of simple substances can consist."

[http://philosophy.eserver.org/leibniz-monadology.txt]

[http://en.wikipedia.org/wiki/Monadology]


Whenever I see Searle's Chinese Room argument, or a version of it, invoked to show that brains cannot possibly be computers, I am reminded of a passage from Terry Bisson's "They're Made out of Meat" [1]:

"They're made out of meat."

"Meat?"

"Meat. They're made out of meat."

"Meat?"

"There's no doubt about it. We picked up several from different parts of the planet, took them aboard our recon vessels, and probed them all the way through. They're completely meat."

"That's impossible. What about the radio signals? The messages to the stars?"

"They use the radio waves to talk, but the signals don't come from them. The signals come from machines."

"So who made the machines? That's who we want to contact."

"They made the machines. That's what I'm trying to tell you. Meat made the machines."

"That's ridiculous. How can meat make a machine? You're asking me to believe in sentient meat."

[1] http://www.eastoftheweb.com/short-stories/UBooks/TheyMade.sh...


Despite its entertainment value, there is nothing in the passage which supports the belief that the brain is a computer (or even a "meat computer") or demonstrates why the analogy of brain as computer is logically different from brain as mill. That's not to say that the complexity of a computer may not create a more attractive analogy, but one must keep in mind that no matter how attractive Camelot appears on screen, it's only a model.


You quoted a statement Leibniz made about 300 years before Turing proved that there are machines that can compute everything that is computable. I and others explained in comments below why we have reason to believe that computers are fundamentally different from mills and that Leibniz's argument, as well as Searle's, falls flat. By the way, it is not necessarily the "complexity of a computer that may create a more attractive analogy". The physical parts of computers can be less complex than those of mills. I wonder how Leibniz would have reacted if he had been shown Google. Would he have believed, for example, that the essence of its face recognition software must be "sought in a simple substance"?

For more rebuttals of the Chinese Room argument see its Wikipedia page. I like this one in particular: a guy sits in a room and waves a magnet up and down, thereby creating electromagnetic waves. But you don't see light coming out. So light cannot possibly be electromagnetic waves.


Whenever I see an argument about us fully knowing how brains work, I remember some Turing-Gödel stuff which says that we would need much bigger brains to really do that. And the big brains would only be able to get an answer about smaller brains, not about themselves.

So, any final answer about our brains requires far more computing power, in whatever form (mechanical, biological, or whatever), than we have now.



But Leibniz (unlike Searle, who is pretty inconsistent on this point, apparently) isn't denying or overlooking the similarity between the idea of "sentient meat" and that of sentient mill mechanisms. It's precisely the basis of his argument.


Could you explain a bit?

This is how I understood the argument: Parts can’t form machines which perceive (I couldn’t find a justification for this assumption – that’s the sticking point), therefore perception is not created by parts but by a substance.

The meat comment targets the first part of the argument. Leibniz seems to make an assumption (parts can't form perceiving machines ≈ meat can't think) which he doesn't really justify.

I thought he was using the mill as an example to show that perceiving machines made from parts are self-evidently absurd, to ridicule the idea of perceiving (or conscious) machines. What am I missing?


Yet another dead/hidden comment that shouldn't be: http://news.ycombinator.com/item?id=2598559 by brudgers. Here's the text:

Keep in mind that the analogy of the mill is only one element in support of Leibniz's central argument in the Monadology. A crude outline:

    Perception exists. 
    Perception cannot be found in the parts. 
    Therefore there must be something else. 
    To avoid the same problem the something else
    must not be composed of parts.
    Therefore a simple substance [the monad] which 
    perceives must exist.
What the passage from Leibniz suggests is that the step from drawing an analogy using the most sophisticated technology of the day as representative of the brain to the belief that the brain is that technology is not unique to the invention of computers.


Leibniz was an idealist. He lived in peculiar ideological waters: he was Christian. He had to believe that there's the mind, there's reality, and both never touch. If he ever strayed from the party line, the inquisition would have taken care of him.

According to Leibniz, neurons can't exist. Exercise does not improve your mood. Brain surgery is useless. Caffeine does not work. Hormones do nothing. Coma is unexplainable by definition. And so on.

Reality killed Leibniz.


Actually the theory of monads bypasses mind/body dualism by asserting a kind of synchrony. Spinoza made the same move in a different way by claiming God is the multi-dimensional plane of existence whose varying realms (mind and extension [body] being just two of an infinite number) are completely synchronous. You're also off with the Christian comment, since Leibniz wasn't within the domain of the Catholic church.


St. Luke understood the brain to be the home of mental faculty, and the situation never really changed. Especially once Aristotle re-appeared, the brain was definitely understood as an organ that processes sensory input and can be affected like other organs. Caffeine, exercise, possibility of surgery all known. Brain speculated to be involved in comas. Existence of neurons speculated.


Moreover, it must be confessed that perception and that which depends upon it are inexplicable on mechanical grounds, that is to say, by means of figures and motions. And supposing there were a machine, so constructed as to think, feel, and have perception, it might be conceived as increased in size, while keeping the same proportions, so that one might go into it as into a mill. That being so, we should, on examining its interior, find only parts which work one upon another, and never anything by which to explain a perception.

That seems like too big a leap. Why can't "parts which work one upon another" create perception? Is this assumption justified? (I really hope the justification is not "because mills that perceive are self-evidently absurd". If it is, then my next question is: Why?) If that assumption doesn't stand, the rest of the argument falls flat. I would like to hear a bit more about why compound parts can't create perception.

The brain certainly looks like it consists of compound parts. We obviously cannot (yet) explain consciousness, so it is possible that it is impossible to create consciousness with compound parts; we just don't know. In that light Leibniz's argument seems unconvincing, at least as far as I can understand it.


I believe that parent's point is precisely that it is indeed unconvincing, and so we should in turn be mindful that our own idea that the brain is a computer may, given further technological advances, come to strike us as just as unconvincing as Leibniz's argument. Metaphors of the brain follow along with the state of the art of technology. For the ancient Greeks, the human composition (they didn't yet talk of brains and consciousness) was the city-state; for early modern natural philosophers, the mill and the like; during the industrial revolution, machines; and now, with us, computers and computation.

It is unlikely that our current analogy is any more or less true than previous ones. That is to say, they may be illuminating in some way and not entirely useless in our thinking about brains and consciousness, but as it goes, they are not and cannot be true because the conjectures cannot in the end even be shown to be wrong. Consciousness and the brain may not in the end be inexplicable, but I very much doubt that we will find that the brain and consciousness are something other than the brain and consciousness, like, for instance, a mill or a computer. It would be the most surprising thing of all were we to find out that evolution produced a kind of technology before that technology was ever produced as a human tool -- but that is a different argument and I'm not even sure what I mean by it.


Your reasoning is wrong. There is a crucial difference between believing that a computer can simulate a brain and believing that a windmill can do so. That is, that a computer can, in principle, simulate mills as well as city-states and all the other objects you listed.

"It would be the most surprising thing of all were we to find out that evolution produced a kind of technology before in fact that technology as a human tool was ever produced"

Evolution has produced technologies long before humans invented them. For example, bats and whales use sonar, eyes use lenses, plants use photosynthesis. In fact, one could argue evolution does little else than invent technologies.


"There is a crucial difference between believing that a computer can simulate a brain and believing that a windmill can do so."

That depends entirely on what you think a brain is, and it just so happens that a prevalent view these days is that a brain is a computer, and so of course it follows very easily that a computer can simulate a brain.

But once that assumption is gotten rid of, it doesn't follow so easily at all. People are ignoring a good lot of scientific method when they propose that, were a computer ever to simulate a brain, the veracity of that simulation could ever be verified. The only data that could ever be provided would be a very limited set, namely the external behaviors. What would essentially be needed in the simulation, that is, the production of consciousness, could never be empirically tested. Consciousness as an object isn't even a scientific entity; it's something that can only be corroborated internal to the agent that has it. It isn't analyzable into parts and it isn't able to be abstracted out of its environment, and so cannot be turned into a scientific object. The brain, on the other hand, we might suppose is an object fitted to empirical methods, but what people really want to get at is consciousness, and the brain without that is probably not the problem most people have in mind. So one could simulate the brain, but only on a very restricted view of what a brain is.

On the other hand, on second thought I do agree with you that evolution produces technologies and even maybe that that's all evolution has ever done. I'm afraid I'll have to think about that point more if I am to respond to it specifically, though I think it's tangential to the rest of the argument I am trying to make above (which is admittedly sketchy).


I'm having a hard time with your argument. Some things jump out:

The only data that could ever be provided would be a very limited set, namely the external behaviors. - This isn't true at all. We slice open brains to find out how they work. We stick things in them to measure and stimulate them. One fellow even had a camera installed to produce phosphenes via electrodes on his visual cortex. Science has gone well beyond external behaviours. (We are talking about the mechanics of brains here, so B. F. Skinner doesn't get an invitation. Once the brain simulation is running and babbling, then he can psychoanalyze it all he likes.)

Consciousness as an object isn't even a scientific entity, - That's good news, not bad. If it's not a scientific entity, we don't have to worry about it or hold it up to scientific scrutiny. Similarly we can stave off worrying about whether it has a soul, and which God it belongs to should it in fact have a soul--whatever that means.

but what people really want to get at is consciousness, and the brain without that is probably not the problem most people have in mind. So one could simulate the brain, but only on a very restricted view of what a brain is. - A brain is whatever science says it is. The word you want to (and are free to) play with is consciousness. For maximum convenience, you may define it as the quality that a brain produced by sexual human reproduction has and that an artificial brain made by humans has not. Whatever it is, it makes us very special indeed!

There was more sarcasm there than I would have liked, but I can't resist. So I apologize, but I hope you see my point: arguing that a simulation is impossible because of some mysterious scientific non-entity is simply not a scientifically valid argument.

(Bio-/electro-)mechanically speaking there is no known inherent barrier to simulation. Roger Penrose likes the idea that there is some sort of quantum entanglement responsible for human consciousness but it's not supported and he's having a hard time even demonstrating that his specific claims are possible. So for now, I am happy to assume that (robust, human-like) brain simulation is possible in principle. I don't know, of course, and I probably will not live to see it. Then again, computing is a runaway train and there are some compelling players in the simulation field at this time--so who knows?


Consciousness as an object isn't even a scientific entity, - That's good news, not bad. If it's not a scientific entity, we don't have to worry about it or hold it up to scientific scrutiny. Similarly we can stave off worrying about whether it has a soul, and which God it belongs to should it in fact have a soul--whatever that means.

Ah, but this is the crux of the Chinese Room argument. You're saying that since we can't measure it or detect it, it doesn't exist, and therefore isn't important. When you have a brain simulation going and having a conversation with it, that's enough for you. It says it's conscious, and we can't measure consciousness, therefore it is conscious.

I disagree. Strongly. That brain simulation is a zombie. The man in the room does not speak Chinese.

But we'll see. I actually think we will, Kurzweil might be somewhat of a crackpot with his singularity, but his extrapolations of increases in processing power say that we'll have the power to simulate a human brain in a pocket-sized thing in 2030-something, and the power to simulate all of humanity in 2040-something. I'll be around 60 years old then, and hopefully still sticking around. :-)


You're saying that ...

No, you're putting words in my mouth. I'm arguing the scientific facts of the matter, and I believe I did a good job of keeping my personal beliefs and opinions separate.

It's especially important to note that my treatment of the term "consciousness" was predicated on the given assumption that it is not a scientific entity. At the moment the term has a handful of scientific meanings, but none are what people are after. My argument is that you can't argue meaningfully about it until you can define it meaningfully.

P.S. What do you mean by "zombie"? Traditionally it means something different from what I think you mean by it.


No, you're putting words in my mouth.

Sorry about that.

My argument is that you can't argue meaningfully about it until you can define it meaningfully.

Agree. The problem is that I think consciousness is inherently subjective. We experience qualia (http://en.wikipedia.org/wiki/Qualia) and you just can't objectively describe those. For example, could you describe the colour blue to a blind person? Every time you look at a blue object, you know it is blue, you experience that it is blue, but you can't describe it, and you can't compare it with, for example, my experience of blue objects. I have no idea if blue objects "look" the same in your mind as they do in mine. We can both agree that a certain object is blue, we both associate the effect blue light has on our eyes with blue objects, but that's it.

So imagine now that we make an AI with sensory inputs, perhaps we make a robot, perhaps we simulate a human brain, and we teach it that blue objects are blue. If we then show it objects, it should be able to correctly tell us if they are blue or not. But does that AI experience blue? We don't know. We can't ever know.

Imagine then that we "upload" your mind into a machine and put it in a humanoid robot. That thing would then walk like you, talk like you, remember like you, laugh like you, joke like you, cry like you, etc.

But would it be you? Would it be alive? Would it have consciousness? In my opinion - no. It's moving, but it's dead, so it's a zombie, a philosophical zombie: http://en.wikipedia.org/wiki/Philosophical_zombie


I was curious about the mention of the zombie before because this line didn't seem right: That brain simulation is a zombie. The man in the room does not speak Chinese.

I'm fairly certain that qualia--whatever it is--is not a necessary quality for the room to be capable of comprehending Chinese. It sounds like you would agree. The problem with qualia though is that there's no way for you or I to know whether the other participates in the phenomenon.

I'd like to mention that Dan Dennett has done a wonderful job of arguing that qualia is a bad term. I disagreed at first, but by the end of reading his argument I could only agree that the term is hopelessly ruined. You'll see his name in the Wikipedia article you linked. If you read his Quining Qualia[1] (I believe that is the one, though it may have been a follow up that I found so convincing) you might see the problem with arguments about whether something has or hasn't qualia.

1: http://ase.tufts.edu/cogstud/papers/quinqual.htm

As far as I'm concerned, if my brain were adequately simulated in a computer it would be like a plaster mould of a plaster mould. They're not the same thing, but functionally they are. If qualia exists I'm happy to assume that it's the product of a physical process which can be simulated. If it does not then I am quite certain the thing people mistake for it can be simulated--and perhaps already has been.

My attitude on the whole qualia matter is: how important can a thing be if no one can define it well enough to measure it? More importantly, I don't believe in magic. If something special is going on there I believe it's a product of natural processes, not a cause of them. "If it's there it can be reproduced."


My attitude on the whole qualia matter is: how important can a thing be if no one can define it well enough to measure it?

In my opinion: Very important. But I'll read that paper you linked, and we'll see if I still believe that. :-)


    You're saying that since we can't measure it or detect it, it doesn't exist, and therefore isn't important
If you can't measure it or detect it, how would you know it's there and it's the source of the phenomenon? If I tell you that it's not consciousness but magic pixie dust, invisible and non-detectable magic pixie dust, would you start searching for Neverland?


I am conscious. I experience being conscious every waking hour of my life. I don't know why, I can't objectively describe the experience of it, but I know it is there.

I can't speak for you, but given that we're of the same species, I'm gonna assume that you are conscious like me.

Conscious machines? No, that'll require a lot more convincing than a song and a dance.


I don't understand. Is consciousness a feeling or an external manifestation? Is it something purely internal or something also visible on the outside? If it's only internal, you cannot, by definition, say whether a machine is conscious, but neither can you say it of any other human. If tomorrow we discovered an alien civilization and started a meaningful communication, would you question their consciousness?

Personally, I don't think that all the discussion about consciousness is meaningful. Consciousness could be the human feeling of abstract reasoning, like cold is the human feeling of registering a lower temperature. A machine could reproduce abstract reasoning, and asking about its consciousness would be like asking if a thermostat feels cold.


(Brilliant, this is the second time I've argued against strong AI here and only received downvotes instead of comments. That's not how this place is supposed to work.)


>> There is a crucial difference between believing that a computer can simulate a brain and believing that a windmill can do so.

> That depends entirely on what you think a brain is

No. It depends on what you think a computer is. You see, the reason why many scientists today believe that a brain is a computer, or can be simulated by a computer, is that computers are universal machines. In contrast to a mill, which can only mill grain, a computer can simulate earthquakes, the weather, cars, other computers, and, you guessed it, mills.

>People are ignoring a good lot of scientific method when they propose that, were a computer ever to simulate a brain, the veracity of that simulation could ever be verified.

You seem to ignore the http://en.wikipedia.org/wiki/Turing_test.

>What would essentially be needed in the simulation, that is, the production of consciousness, could never be empirically tested.

How do you know that I am conscious? Or any other person you converse with for that matter.


>How do you know that I am conscious? Or any other person you converse with for that matter.

That's exactly his point, and that problem hasn't budged an inch since Descartes.


Thinking there is a difference between believing that a computer can simulate a brain and believing that a windmill can do so doesn't depend at all on what you think a brain is, or on what a brain actually is.

The case that was made by the parent is that since a computer can simulate a windmill, there is a distinct difference in the relative likelihood of the beliefs, not that either was actually true. Depending on the idea that the brain was a computer to prove that it could be simulated by a computer would be pretty much assuming the conclusion.


Consciousness as an object isn't even a scientific entity, it's something that can only be corroborated internal to the agent that has it.

If this were really true I would have no basis for thinking that other human beings were conscious, and thus I would have no reason to think that computers couldn't perfectly simulate other human beings.


>There is a crucial difference between believing that a computer can simulate a brain and believing that a windmill can do so.

Leibniz didn't say a mill can simulate the human mind. He's talking about any contraption that can simulate the human mind (which is also capable of simulating many things given proper arithmetic and [infinite?] time), one of which would be, for the sake of illustration, the size of a windmill. You have to realize that dissection of the human body was still extremely taboo in the 17th century (not to mention technological impediments). You may also be missing the point.


It isn't a matter of metaphors but of abstractions. The brain as signal processor is a more accurate abstraction than the brain as city-state in that it makes better predictions.


> We obviously cannot (yet) explain consciousness so it is possible that it is impossible to create consciousness with compound parts, we just don't know.

Giulio Tononi's Integrated Information Theory of Consciousness addresses this very notion, and it seems to fit neuroscientific observations quite well.

---

Consciousness as Integrated Information: a Provisional Manifesto [1] --Giulio Tononi

Abstract: The integrated information theory (IIT) starts from phenomenology and makes use of thought experiments to claim that consciousness is integrated information. Specifically: (i) the quantity of consciousness corresponds to the amount of integrated information generated by a complex of elements; (ii) the quality of experience is specified by the set of informational relationships generated within that complex. Integrated information (phi) is defined as the amount of information generated by a complex of elements, above and beyond the information generated by its parts. Qualia space (Q) is a space where each axis represents a possible state of the complex, each point is a probability distribution of its states, and arrows between points represent the informational relationships among its elements generated by causal mechanisms (connections). Together, the set of informational relationships within a complex constitute a shape in Q that completely and univocally specifies a particular experience. Several observations concerning the neural substrate of consciousness fall naturally into place within the IIT framework. Among them are the association of consciousness with certain neural systems rather than with others; the fact that neural processes underlying consciousness can influence or be influenced by neural processes that remain unconscious; the reduction of consciousness during dreamless sleep and generalized seizures; and the distinct role of different cortical architectures in affecting the quality of experience. Equating consciousness with integrated information carries several implications for our view of nature.

---

Theoretical approaches to the diagnosis of altered states of consciousness [2] --Melanie Boly, Marcello Massimini and Giulio Tononi

Fig. 1. legend: Information and integration are fundamental properties of conscious experience.

(A) Information: the photodiode thought experiment. IITC states that consciousness is highly informative, because each conscious experience is implicitly discriminated by ruling out an infinite number of other available alternatives. According to the theory, the more alternatives you can rule out, the more informative is your conscious experience, and the higher your level of consciousness. This concept is illustrated when comparing a photodiode, a simple light-sensitive device, to any one of us (here Galileo Galilei, for the sake of the example), facing a blank screen. According to the theory, the key difference between us and the photodiode lies in the fact that when specifying ‘‘dark,’’ the photodiode discriminates between only two alternatives, while we discriminate it from a large repertoire of other available percepts. This difference affects the meaning of the discrimination performed, and the amount of information generated.

(B) Integration: the camera thought experiment. By multiplying the number of photodiodes, as in the case of a camera, one can considerably increase the amount of information generated. The difference between us and the camera is that the information generated by each photodiode is not communicated to the whole system, that is, the information the system generates is not integrated. This is reflected by the fact that if one were to separate the camera into two parts with an infinitely thin line, this would not impair its function, nor diminish the amount of information generated. If the same procedure is applied to the brain, it will result in a split into two independent consciousnesses, similar to what is observed in split-brain patients. Integration of information allows the system to perform a single discrimination at the scale of the whole system, in order to generate a unified perception.

---

A perturbational approach for evaluating the brain’s capacity for consciousness [3] --Marcello Massimini, Melanie Boly, Adenauer Casali, Mario Rosanova and Giulio Tononi

Abstract: How do we evaluate a brain’s capacity to sustain conscious experience if the subject does not manifest purposeful behaviour and does not respond to questions and commands? What should we measure in this case? An emerging idea in theoretical neuroscience is that what really matters for consciousness in the brain is not activity levels, access to sensory inputs or neural synchronization per se, but rather the ability of different areas of the thalamocortical system to interact causally with each other to form an integrated whole. In particular, the information integration theory of consciousness (IITC) argues that consciousness is integrated information and that the brain should be able to generate consciousness to the extent that it has a large repertoire of available states (information), yet it cannot be decomposed into a collection of causally independent subsystems (integration). To evaluate the ability to integrate information among distributed cortical regions, it may not be sufficient to observe the brain in action. Instead, it is useful to employ a perturbational approach and examine to what extent different regions of the thalamocortical system can interact causally (integration) and produce specific responses (information). Thanks to a recently developed technique, transcranial magnetic stimulation and high-density electroencephalography (TMS/hd-EEG), one can record the immediate reaction of the entire thalamocortical system to controlled perturbations of different cortical areas. In this chapter, using sleep as a model of unconsciousness, we show that TMS/hd-EEG can detect clear-cut changes in the ability of the thalamocortical system to integrate information when the level of consciousness fluctuates across the sleep–wake cycle. Based on these results, we discuss the potential applications of this novel technique to evaluate objectively the brain’s capacity for consciousness at the bedside of brain-injured patients.

---

[1] http://clm.utexas.edu/~compjc/papers/Tononi2008a.pdf Very theoretical/mathematical

[2] http://www.coma.ulg.ac.be/papers/vs/boly_PBR_coma_science_20... A more gentle introduction

[3] http://www.coma.ulg.ac.be/papers/vs/massimini_PBR_coma_scien...
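
If you want a rough feel for what "information generated by the whole, above and beyond the parts" means, here is a toy sketch in Python. It computes only the older Tononi-Sporns "integration" quantity (multi-information) from observed joint states; it is not the full phi of the papers above, which is defined over causal perturbations and partitions of the system.

    import math, random
    from collections import Counter

    def entropy(counts):
        total = sum(counts.values())
        return -sum((c / total) * math.log2(c / total) for c in counts.values() if c)

    def integration(samples):
        # multi-information: sum of the parts' entropies minus the whole's entropy
        whole = entropy(Counter(samples))
        parts = sum(entropy(Counter(s[i] for s in samples)) for i in range(len(samples[0])))
        return parts - whole

    # Strongly coupled 3-element system: the elements mostly agree.
    coupled = [(0, 0, 0)] * 45 + [(1, 1, 1)] * 45 + [(0, 1, 0)] * 5 + [(1, 0, 1)] * 5
    # Independent 3-element system: each element varies on its own.
    random.seed(0)
    independent = [tuple(random.randint(0, 1) for _ in range(3)) for _ in range(1000)]

    print(integration(coupled))      # ~1.5 bits: the whole carries more than its parts alone
    print(integration(independent))  # close to 0: nothing beyond the parts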


I put a disparaging remark here, but I really shouldn't insult Leibniz based on one partial quote.

More politely, I think it wrong to say "compound substances cannot be perceptive therefore humans have supernatural souls". Brains and bodies are compound substances and have perception. The next step is "I don't understand how" not "tada, souls!".


There is an exactly analogous argument about the human brain and its cells; and I understand the Turing test as a thought experiment that shows it to be invalid in any meaningful sense. If a machine (or a seeming person on the end of a terminal) says they perceive and think, how are you to argue otherwise? You have no evidence other humans, let alone other machines, actually can perceive or think, other than how they seem to act.


IMO, the Turing test is conclusive to the extent one accepts Berkeleyan idealism and abandons a belief in the necessity of other minds. Under those premises, solutions to the Turing test are, at this point, trivial, as ELIZA long ago demonstrated.

http://en.wikipedia.org/wiki/ELIZA
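
For reference, the whole trick in ELIZA-class programs is keyword matching plus reflecting the user's own words back. A minimal sketch (the rules here are made up for illustration, not Weizenbaum's original DOCTOR script):

    import re

    # A few ELIZA-style rules: match a keyword pattern, echo the user's words back.
    RULES = [
        (r'I need (.*)',    "Why do you need {0}?"),
        (r'I am (.*)',      "How long have you been {0}?"),
        (r'.*\bmother\b.*', "Tell me more about your family."),
        (r'(.*)',           "Please go on."),   # fallback when nothing matches
    ]

    def respond(text):
        for pattern, template in RULES:
            m = re.match(pattern, text, re.IGNORECASE)
            if m:
                return template.format(*m.groups())
        return "Please go on."

    print(respond("I am unhappy"))        # -> How long have you been unhappy?
    print(respond("I need a vacation"))   # -> Why do you need a vacation?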


I would argue that ELIZA, and the chatbots that have been programmed since, are not solutions to the Turing test.

Have you ever tried holding an actual conversation with a chatbot? They almost invariably fail the real-life Turing test within two or three messages. They parse natural language incorrectly, they can't keep track of the topic of conversation, and they often reply with non sequiturs.

Certainly, chatbots can appear to act with intelligence in certain situations. Given enough back-and-forth communication, however, a human will always realize that the bot is just parroting words and phrases that are statistically relevant to the human's questions.

I don't think it's unreasonable to define Turing test success as the ability to consistently fool ordinary human beings through intelligent communication - and that, so far, has not been accomplished.


>"Given enough back-and-forth communication, however, a human will always realize that the bot is just parroting words and phrases that are statistically relevant to the human's questions."

That, however, is not the Turing Test at all. Here is what Turing actually wrote:

    I believe that in about fifty years' time it will be possible
    to programme computers, with a storage capacity of about
    10^9, to make them play the imitation game so well that
    an average interrogator will not have more than 70 per cent
    chance of making the right identification after five minutes
    of questioning.
http://orium.homelinux.org/paper/turingai.pdf


"Thus it is in a simple substance, and not in a compound or in a machine, that perception must be sought for"

This is an incorrect conclusion. You can inspect individual grains of sand as long as you want; you will never find a beach. Yet beaches are composed of sand, with no magic involved.


On the other hand, a beach is also part of a shoreline, and thus you will not find a beach at Great Sand Dunes National Park.

http://en.wikipedia.org/wiki/Great_Sand_Dunes_National_Park_...


What’s your point?


By "beach" we don't mean "many grains of sand." The parent analogy using grains does no more gets us a beach than using gears and pulleys or RAM and CPU's gets us a brain.


We mean many grains of sand in a certain configuration which are also in close proximity to many molecules of water in a certain configuration†.

Do you really argue that beaches have some sort of, well, “magic”, I guess, property which makes them a beach? That seems entirely unnecessary for something as simple as beaches. You can map all the grains of sand, you can map all the molecules of water and confidently identify when something is a beach. We know how to explain beaches in terms of grains of sands and molecules of water.

We cannot yet explain all properties of brains (most notably consciousness) in terms of, say, neurons but just because we cannot yet doesn’t mean we never will.

†What those configurations are depends entirely on your definition of "beach". Sand or water might also play no role at all, again, depending on your definition.


>Do you really argue that beaches have some sort of, well, “magic”, I guess, property which makes them a beach? That seems entirely unnecessary for something as simple as beaches.

And yet your caveat at the bottom illustrates that what a beach is isn't a trivial problem. Leibniz just wanted to avoid the regressive mechanical discussion by positing substances or monads as a solution to answering the question "What's a beach?". This isn't to say he's right so much as to say that your trivialization doesn't work and that there's room for his speculation.


No, that’s just my standard “definitions are boring, let’s just agree to not discuss them”-disclaimer. Ah, well.

Definitions don’t make the problem hard. There is just no one true definition of beach, that’s fine. We can agree to use the one most commonly used or you can pick one, it doesn’t matter as long as we communicate the definition we are using.

I again and again find it astonishing that so many people have problems understanding what definitions are all about. They are about communication. Arguing about definitions is rather pointless, at least if your goal is to learn something about the world. (Good definitions make communication easier, so there is room for argument about definitions, but not really in this case.)


I'm not trying to defend Leibniz' argument so much as point out the space in which he can make it. I think that you may be confusing the two in responding to me as, let's be clear, Leibniz is writing in an utterly distinct intellectual environment from us. The pragmatism of communication as a standard for assessing representations does not exist in the 17th century as human cognition is still being assessed in relation to a possibility of omniscience that sometimes goes by the name of God. Communication, or inter-subjective experience, only becomes a standard after Kant. So monads or Leibniz' way of talking about substances is a way to negotiate the unique problems of conscious perception in relation to that possibility of omniscience.


>"Do you really argue that beaches have some sort of, well, “magic”, I guess, property which makes them a beach?"

I'm not sure "scientific" properties do much better for the sort of ordinary human purposes for which the property of beachness tends relevant, e.g. distribution of persons, water quality, presence of tiki bars, etc. In other words, replacing ordinary language with descriptive formalism throws the baby out with the bath water, i.e. no matter how accurate the description of neuron firings is, it will not touch upon what it is like to tuck one's child into bed at night.


Distribution of persons, water quality, presence of tiki bars – those are all equally scientific properties. Humans work with maps (or abstractions) they understand: All just a question of definitions.

No matter how accurate the description of neuron firings is, it will not touch upon what it is like to tuck one's child into bed at night.

Great strawman! I don’t think anyone ever claimed that.


It's not a strawman, because any meaningful claim of isomorphism between computers and brains depends upon there being a mapping between computer states and brain states, i.e. such claims rely upon Turing equivalency. Unless one continues to make up definitions as one goes along.

Incidentally, the claim that the distribution of Tiki Bars is normally a component of the scientific definitions of "beach" is the most patently absurd use you have made of a sophomoric definition strategy so far in this discussion.


So when you say "describing" you don't actually mean "describing", you mean "running the simulation"? That's alright then, but still leaves you with an entirely unjustified conclusion.

As to your second point, you brought Tiki bars up. I don't think Tiki bars feature prominently in the definitions of beaches in the scientific literature but that doesn't really matter. What matters is that the distribution of Tiki bars (and other properties they might have) is accessible to the scientific method. You can't claim that your definition of "beach" which apparently includes Tiki bars is not accessible to science (at least with the examples you gave for extending the definition).


>So when you say "describing" you don't actually mean "describing", you mean "running the simulation"

I mean absolutely nothing of the sort. What I am talking about is the ordinary use of the word "beach" in the ordinary grammar of the English language, not its use in some game in which one attempts to translate ordinary terms into "scientific" propositions based on a set of logical transformations which ignore the implications of context and connotation.

So long as one insists that the computer simulation is isomorphic to what it simulates, one is doomed to stay in the cave.


You are funny.


I found it to be quite useless to talk about the philosophy of mind with people on the internet. Either it's a circle jerk because you already have the same opinions, or you waste time talking about different things while calling each other retarded.


Let's own up to at least one basic fact: at least some portion of what we do is algorithmic -- computational -- in nature. Why? As we have this discussion, we're absorbing 1s and 0s. And we're spitting them back out in response on the other side... (what happens in the middle is left as an exercise to the reader. ;-)


algorithmic != computational

The recipe for boiled eggs is an example of a non-computational algorithm.


I'm not quite sure what point you're making, but I should clarify mine. I wasn't trying to say anything deep. I was just making a _very_ surface observation that, at some level, our brains consume, transform, and produce bits. It's almost tautological.

I abused "algorithmic" but not in the way you suggest: there are some well-defined transformations of bits that no algorithm [in the technical sense] can perform. I'll go a step further and suggest that our brains can't perform those transformations, either.


Wow. Leibniz constructed the Chinese Room argument nearly three hundred years before John Searle or computers. Nice. :)


I constructed it in my head when I was in college. I think lots of people do that; it's not a very complicated idea at all. It's just that some people wrote it down and their writings somehow became famous.

When I constructed a similar argument, it was simply an argument I was having in my head with an imaginary person who was arguing with me that computers can have perception just like humans.


A homunculus is a homunculus, but nevertheless Searle's Chinese Room is perhaps a bit more radical - not so much conceptually, but rather because it was proposed in contradiction to a well-entrenched belief system - behaviorism.


clockless system design with NULL convention logic:

http://books.google.co.uk/books?id=UTHFcdvvHQcC


Cool! As Wikipedia notes, it's harder to design things this way (and it apparently tends to use more transistors). But thank you, I hadn't a name to search for before. :)

http://en.wikipedia.org/wiki/Asynchronous_circuit


Harder is not necessarily true. It depends on what you're trying to build.

In terms of QDI Async circuits, the main disadvantage is an increase in wires. In synchronous circuits, a slow enough clock ensures that the computation is completed (propagates through transistors) before moving on to the next clock cycle. For QDI async, the replacement is to have a handshaking protocol to ensure correctness as values pass through the transistors/computation. The handshaking increases the number of wires needed (and thus silicon area).
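
To illustrate what that handshaking looks like, here is a toy event trace of a four-phase (return-to-zero) request/acknowledge protocol in Python. It's a sketch of the general idea, not NCL or an actual QDI circuit; the extra req/ack signalling per transfer is where the additional wires and silicon area go.

    # Toy trace of a four-phase handshake standing in for a clock edge.
    def four_phase_transfer(value, log):
        log.append("sender:   req=1, data=%s" % value)  # 1. raise request with valid data
        log.append("receiver: ack=1 (data latched)")     # 2. receiver latches, acknowledges
        log.append("sender:   req=0")                    # 3. sender returns request to zero
        log.append("receiver: ack=0 (ready for next)")   # 4. receiver returns ack to zero

    log = []
    for v in [3, 1, 4]:
        four_phase_transfer(v, log)  # three transfers, no clock anywhere
    print("\n".join(log))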



