Why Minds Are Not Like Computers (2009) (thenewatlantis.com)
33 points by AndyBaker on March 27, 2014 | 66 comments



I could smell the "Chinese Room" argument coming from the first line of that article.

Of course the real answer is still, and has been for a long, long time, "We don't know."

And even though he seems to be trying to expand on the "Chinese Room" argument a little, it still doesn't prove anything.

Of course the person in the room does not know Chinese, any more than each individual neuron in my brain "knows" Chinese; it's the whole system that knows Chinese.

And if that is not enough for you, you have to explain this:

If seemingly "dumb" matter can combine to become smart humans, then what's to prevent silicon from combining into enough complexity that a mind emerges?

No, the real answer is still "we don't know."


We don't have the computing power to simulate a brain quark-for-quark yet; when we can, it'll be interesting to see what happens. Until then it's all moot anyway, as it's a religious argument. For me it's clear that brains are computers, just not the kind we are using right now. This has seemed obvious since I first touched a computer 30 years ago: I didn't grasp why we still needed humans for anything back then, and that more or less still holds today. The strong-versus-weak-AI fight continues, and I'm firmly convinced strong AI will happen; there aren't many good arguments against it. Most arguments now amount to lack of comprehension or religion (some form of saying there is more than just this, while failing to explain what or how). And this article isn't even an argument.


The arguments in the article (and repeated here by some HNewsers) are familiar: "If a computer does it, then it's not a mind." Well, if that's the definition of mind for the sake of discussion, then there isn't much to discuss. I see it differently: I feel that mind is a property of a working brain, and that someday we may say that mind is also a property of a working AI program.

I don't believe in magic. I don't believe that there is spooky stuff in my head. I think that my brain is an amazing biological machine. The idea that there is some fundamental insurmountable barrier that will prevent strong AI from succeeding (eventually, if humans survive) would mean that there is some magical property of brains (human brains, whale brains, chimp brains, dog/cat brains etc.) that we will never have access to. This seems so unlikely that I prefer to believe that there is no magic and that AI will succeed. And I'm not saying that we need to mimic brains to have thinking machines. The mind and its thinking, feelings, emotions, and so forth may simply be emergent properties of advanced enough artificial or natural computing structures (e.g. brains).

If space aliens landed on earth tomorrow, would we say "Oh, but they can't think. They don't have brains like we have here on earth so they don't really think. They just process inputs into outputs."

I've enjoyed many books on this subject, but two older ones I recommend are: Gödel, Escher, Bach: An Eternal Golden Braid by Hofstadter and Society of Mind by Minsky.


My, the article rattles on a bit in the typical way of philosophers, where it's hard to analyse what's up with their arguments because they lump about a hundred of them together. But I think there is a basic straw-man argument in implying AI researchers think of the mind as software. I think a more reasonable model in AI is for software to simulate the computational action of the neurones, and for the mind to be analogous to the data that the software or neurones operate on. If you believe the brain is a machine following the laws of physics, then this should be possible. The proof is in the pudding: researchers have already been able to produce functional simulations of the retina and parts of the hearing system, getting similar signals out which can then be fed back into the brain successfully, as seen in the contraption on Rush Limbaugh's head.

Whether that extends to the rest of the brain we will see before long. The Blue Brain / Human Brain Project has a €1bn grant to get cracking on it. Time will tell.

http://www.theguardian.com/science/2013/oct/15/human-brain-p...


A statistical-analysis approach can be, at most, a good educated guess. But if we keep observing the human brain to gather input/output datasets, we can develop a field much like physics, where theory comes from experiments, and even a theory that has not yet seen further experimental proof can be considered provisionally valid. Yes, not every theory based on experiments is true (that is why I call it a good educated guess), because our observed set may be just a subset of all the test cases we need. But using this physics-like approach we can keep pushing the limits of AI theory and experiment forward.

As I mentioned, the other half of this approach (theory first, then experimental validation) is that another Einstein comes along and produces amazing theories once and for all. Then experiments prove them true, to the best of our knowledge to date. Then the loop starts again.

I think the first case may be the more prevalent, because insights are not easy to come by without seeing experimental results first. If we can find an organized way to keep working on the first half, the second half might come along in time. Just some random thoughts, primarily meant to inspire some follow-up conversation.


The argument in this paper comes down to "computers can't think because I don't label what they do as 'thinking'".

Now, you could make the argument that we're very unlikely to build a conscious machine without knowing what consciousness is, and that simply assuming it's an emergent property of complexity is probably foolish. OK. Maybe. Who knows?


This seems to be the standard response of philosophers who take an anti-AI stance: define whatever computers do as not thinking, and therefore computers can't think. The major problem with this argument is that it can be applied to philosophers just as easily.


Is the brain a machine? If you don't believe in souls and that kind of thing, then the brain is material and thus, for me, a machine.

Can you replicate the way the brain works in another machine? If it is a machine, then most probably yes.

Can you replicate the way the brain works in a digital computer? It is possible that there are physical processes going on in the brain that cannot be feasibly approximated by digital algorithms (i.e. it is possible that there are analog machines that are, for some problems, faster than any digital computer can be). There used to be analogue computers, and it is entirely possible that some day they will again be useful.

Can you build a brain without knowing how it works? Well we know how to build a brain without knowing how it works - you need a woman and a man and ...

What is the definition of artificial then?

Can you control AI if you don't know how exactly it works? Probably not - just like we cannot really control humans, but we have pretty good ways of influencing them.


It feels like this article sets up something of a straw man. The analogy that a mind is like a computer is just that, an analogy. It's used to present the high-level structure of the brain in an accessible, understandable way, not as a logical argument.

This analogy is not the justification for why people believe that "the mind can be replicated on a computer". And yet the article tries to disprove the latter, deep and meaningful point by attacking the relatively superficial analogy between [current] computers and minds.

To me, the reasoning behind why I think a computer can theoretically do anything a brain can is much simpler. Programs are ultimately nothing more and nothing less than mathematics. Their capabilities are a finite (but theoretically arbitrary) subset of what mathematics models. And so far, mathematics has been able to model everything. Purely inductively, it seems mathematics should be able to model the mind just like everything else. After all, there is nothing magical about the brain!

At the limit of plausibility, we could just reproduce a mind by modelling a brain using the laws of physics. Sure, this is wildly impractical with current hardware. Perhaps we will never have the hardware to make it practical. But it also demonstrates how it should be possible theoretically: and if something is possible, no matter how impractically, there's a reasonable chance that it's possible in some significantly easier ways.

Saying that a computer can never fundamentally replicate a human mind is the same as saying the mind is beyond mathematical description. To me, this idea does not make sense. I'm not even sure it's well-formed! Mathematics is not a finite set of ideas or a bounded language: it is more like a standard of formality which we can extend in an incredible amount of different ways.

Sure, math has some internal limitations like incompleteness. But these are not really absolute bounds, at least not in the most obvious way. One thing to keep in mind is that mathematics is purely self-contained: all that matters is internal consistency, not a parallel to anything in our actual universe. This means the fundamental limitations are inevitably meta-limitations: they come up when we turn math on itself. Moreover, they only come up when we start encountering the infinite. The human mind may present some meta facets, but it is not infinite, so incompleteness cannot prevent us from modelling it.

It's interesting to note that all the actual arguments for why a computer cannot represent the mind are not actually logical themselves. They rely on vague analogies and emotions, quoting Ionesco but not providing actual reasoning.

Ultimately, it feels like this article is trying to establish a false balance between the two sides of the argument. But they are not anywhere near equal: the people who don't think a computer can model the mind need to present an actual case for why it's beyond the capabilities of mathematics, the system which has fared so well in describing everything else. Where does the line between physics and cognition (which exists on a purely physical plane) come from? Why are they divided? To me, that line simply seems too arbitrary to accept without more cause.

I can see why this line of thought is appealing: consciousness certainly feels magical, so you want to find some real magic there. But I am not convinced real magic exists. Or even makes sense. It would require the human mind to somehow be special in a way that would, at best, be a really suspicious coincidence. I'm not buying it.


It's just the Church-Turing Thesis all over again, with a dash of metaphysical nondeterminism.

So far, everything in reality seems to be "made of" either determinism or randomness. There might be a third option (and options beyond that), but we haven't seen one or thought of one yet.

So far, computation can model determinism perfectly well (after all, computation is just very orderly, bottled-up determinism harnessed to a predictable human purpose!), and model randomness as well by drawing random bits from an entropy source. When you get deep enough, mathematics is in fact reducible to computation (paging Robert Harper). Given this, the Church-Turing Thesis says that our mechanical computers ought to be able to model anything reality is made of: either reality is theoretically Turing-recognizable given enough compute-power, or reality contains super-Turing phenomena that we could exploit to build super-Turing computers.
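To make that concrete, here is a minimal Python sketch (my illustration, not anything from the thread) of those two ingredients: a pure, deterministic step function, plus randomness drawn from an OS entropy source. The constants and names are arbitrary:

    import os

    def deterministic_step(state):
        # A pure function is "bottled-up determinism": same input, same output.
        return (state * 6364136223846793005 + 1442695040888963407) % 2**64

    def random_bit():
        # Randomness is modeled by drawing a bit from the OS entropy source.
        return os.urandom(1)[0] & 1

    state = 42
    for _ in range(5):
        state = deterministic_step(state)
        print(state, random_bit())

Anything built from these two primitives stays within what a Turing machine plus a random tape can model, which is the thesis's point.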

Thus, if we accept a couple of assertions that have held up decently so far, it looks as if all parts of our prosaic, everyday reality ought to be amenable to computational modeling, provided we find the right model and throw enough flops at it.

They are assertions, but they've also held up for a damn long time, and we would need some extraordinary evidence to accept the extraordinary claim that those assertions don't hold.


I'm no expert myself, but I thought the Church-Turing thesis implied the exact opposite. Turing machines and lambda calculus are widely assumed -- though not proved -- to be the limits of what can be computed. And yet, in a Gödelian fashion, for a given computer (equivalently, a finite set of axioms), you can always formulate deterministically true statements that cannot be computed.

Note that these formulations are counter-examples, and thus assert a fundamental limit of computation; but that says nothing about the many other things that might be non-computable without fitting the form of these counter-examples.


Hence why I'd said Turing-recognizable rather than Turing-decidable.


> Church-Turing Thesis all over again

I've heard it referred to as the Church–Turing–Deutsch principle [1], since Deutsch talked about the physical version of it.

I think it's interesting that the assumption that nature is computable comes so naturally, even though as far as I can tell it's entirely unclear whether it is or not. In particular, Deutsch points out that if the universe needs reals, that includes a whole bunch of noncomputable numbers; and in fact our favorite model of physics for a significant chunk of history ('classical physics') did require reals, and so was noncomputable.

I also tend to assume that nature is computable, but I don't think these assertions have really "held up for a damn long time".

Besides the fact that, for most of history, we managed with not much of a model and then a noncomputable one, it seems to me that there might be a bias towards finding computable models over noncomputable ones, which makes any reasoning based on observing that we've found a bunch of computable models suspect.

[1] https://en.wikipedia.org/wiki/Church-Turing-Deutsch_principl...


Interesting argument. Is it fair to summarise it as: (a) Mathematics models the deterministic and random aspects of reality; (b) Mathematics reduces to computation; (c) Church-Turing says that all computation is equivalent; (d) Therefore a Turing machine[2] can fully model reality?

"When you get deep enough, mathematics is in fact reducible to computation (paging Robert Harper)."

Could you elaborate on that, as it seems to be a big jump? I'm not familiar with Harper's work (I'm assuming you mean this [1] Robert Harper). I'm also not a mathematician...

[1] http://en.wikipedia.org/wiki/Robert_Harper_(computer_scienti...

[2] Or, presumably, lambda calculus or Gödelian recursion.


This Robert Harper, indeed:

http://existentialtype.wordpress.com/2011/03/27/the-holy-tri...

http://www.doc.ic.ac.uk/~gds/PLMW/harper-plmw13-talk.pdf

Types are propositions! Programs are mappings! Computation is proof!

These are the truths of this world! Submit to those truths, you webdevs in human clothing ;-)!

(I'm always hoping someone will catch the obscure references I throw into my comments, but alas.)


I don't think the article is saying that the mind cannot be replicated by a computer. It's questioning the use of computational concepts to model human intelligence.

To make the same point differently, if you wanted to replicate human intelligence in its entirety (rather than creating something capable of performing some well-defined task), AI is far too high a level of abstraction. You might be able to replicate a universe and wait for intelligence to evolve, but you still wouldn't understand the intelligence you had created.

The danger of the hardware/software analogy is that we deliberately separated computer systems into (mostly) well-defined and cleanly separated layers. Given how hard it is to avoid leaky abstractions in computer systems that are: (a) vastly less complex than human intelligence and (b) designed to be layered, the systems responsible for human intelligence probably don't expose the clean API required for us to understand them using high-level concepts.


Yes, thank you. So many people here are missing the point. The debate isn't whether or not minds are simply too special to be modeled by a computer. The debate is whether or not the mind itself is a computational system.

I mean, consider the solar system. Can the solar system be modeled by a computer? Yes of course. Is the solar system a computational system? Is it organized on computational principles? Of course not. These questions are not the same thing, and yet conversation about the latter always seems to get drowned out by people saying "How could you possibly deny the former!"


To me the most convincing discussions of artificial or engineered minds start with:

"human minds exist"

Therefore, whatever it is that minds do is possible.


Both you and the parent have assumed without evidence that the mind operates by local physical laws. There is certainly no evidence to the contrary, but an absence of evidence is not evidence.

Religions might be right and the mind is actually the soul.

Or we could all be characters in a virtual world "played" by beings operating us. This world could be an elaborate RPG.

Or something else we haven't figured out yet.

So don't confuse your belief with proof. Your argument has demonstrated nothing except a failure of imagination.


There are no proofs outside pure mathematics; science works based on justifiable beliefs. Absence of evidence is evidence of absence; we don't need to prove unicorns don't exist to ignore their potential existence in our model of the universe. The null hypothesis is that brains are not magic.


I disagree, though not with absolute confidence. Incidentally, I love how fast AI discussions degrade into existentialism.

Even if minds need unknown 'laws' emerging from a dualistic nature of existence, they still exist.

Whether we are human beings in a world that is as it seems, characters in an RPG, brains in a vat, souls tortured by a demon, artificial minds somewhere in a cascade of simulations, or dreaming Hindu gods, we are (or rather I am) still minds under Descartes' definition of a mind, and that mind exists.

As for not confusing proof with belief… Occam's razor has value. Even if the religious view of souls and such is possible, it is not based on any knowledge or information. It's made up. It has a long-shot chance of being correct, in the same way early philosophical claims that matter is made up of wind, fire, earth and water had a long-shot chance of being correct.

In any case, minds seem to have a lot to do with brains and brains are nice squishy physical things that can be prodded and examined and haven't yet been seen to do stuff that is impossible in a material universe.


Taking that argument to its ad extremum conclusion: the universe exists, therefore we can build one.

It's quite possible that we can't engineer a computer that does the same thing as the human mind because of our own limitations - the primary one being time.

That's no reason not to try though.


Hehe. A few differences (:

"We can build one" is not necessarily the conclusion unless we're accepting pretty abstract uses of the terms we and can.

To build a universe inside this universe, universes must be capable of containing universes. To build one outside it, stuff must be capable of existing outside of it. OTOH, it seems like multiple minds can and do exist. It gets pretty loopy (and awesome) if we keep going down this rabbit hole.

The statement is intentionally tautological, or very very close to it. If minds exist then minds exist so there's no point in arguing that they can't exist.

I really wish Douglas Adams or Alan Watts or one of those guys would jump in this thread and explain.


> The analogy that a mind is like a computer is just that, an analogy. It's used to present the high-level structure of the brain in an accessible, understandable way, not as a logical argument.

Actually, this analogy is the foundation of our modern understanding of the mind and is accepted by nearly all experimental Psychologists/Cognitive Scientists. It is certainly pervasive enough to be worth arguing against. Few of them seriously recognize its limitations.

> This analogy is not the justification for why people believe that "the mind can be replicated on a computer". And yet the article tries to disprove the latter, deep and meaningful point by attacking the relatively superficial analogy between [current] computers and minds.

I think you're missing the point. No one cares whether the mind can be replicated on a computer. Researchers/theorists care about whether the mind is a computer. The solar system can be modeled on a computer, and yet there isn't an entire theoretical approach to astronomy predicated on uncovering the algorithms running on the solar system computer. This line of discussion about the capabilities of math and computation are beside the point.


I think the issue is that AI folk believe that, given enough computing power, they can somehow replicate all the brain can do and create the capacity for consciousness and qualia in what is essentially a machine doing mathematics. This assumes that a certain level of computation can suddenly give rise to consciousness, which is currently the least understood part of our biology.

But there is no evidence that we can do that. Maybe it is possible, but there is as yet zero data, and no proven theory that says computation leads to consciousness. Of course, you might say that is like attributing a certain magical underpinning to consciousness, and there is no reason to believe magic exists. But is there a logical, rational, tested theory as to why a certain level of computation should and will lead to consciousness?

It's a claim that is made, I think, with a similar amount of emotional and irrational vigor as the claims that AI will never be able to mimic the brain/mind/consciousness.

Personally, I think, if it is possible, it won't happen any time soon.


There is no evidence for the alternative either.

And again: if "dumb" matter can become aware (us), what's to prevent AI from becoming aware?

The real answer is still "We don't know."


Minor nitpick really, but maths is a subset of computation. This is made most obvious by its requirement of consistency. Gödel showed that you can't cover all possible values and preserve consistency at the same time, but computation doesn't have this restriction.

I think of maths as a bit of a mission to carefully extract the parts of the computational universe that can be reasoned about. There's heaps more kinds of processes that still run just fine (on a Turing machine, say) but whose outcome can't be predicted using less work than any other universal machine would take.
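One concrete example of such a process (my sketch, assuming the commenter means something like computational irreducibility) is Wolfram's Rule 30 cellular automaton; as far as anyone knows, the fastest way to learn its state at step n is to actually run all n steps:

    # Rule 30: each cell becomes left XOR (center OR right).
    def rule30_step(cells):
        n = len(cells)
        return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
                for i in range(n)]

    cells = [0] * 15 + [1] + [0] * 15  # a single live cell in the middle
    for _ in range(8):
        print(''.join('#' if c else '.' for c in cells))
        cells = rule30_step(cells)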

Bit off topic I guess, but to me this idea of irreducible processes has great potential as a basis for the aspect of consciousness that seems like magic. It could be irreducibly complex and inconsistent computation: still perfectly valid, but un-analysable.


I have a totally opposite view, I think it's almost certain that there's something very special about mind / consciousness.

For example, notice that the content of your consciousness is the only thing that's certain. The laws of physics can change. Or you might wake up from the matrix (recursively, multiple times). Basically anything can happen. But the fact that you feel good (or bad) at this moment is certain. Mind is something like a window to the physical world.

Also, consciousness is subjective; physics cannot describe it, not even in theory.


"notice that the content of your conciouness is the only thing that's certain".

Tell that to someone with multiple personalities. Tell that to someone who has brain injuries reducing them to a vegetable. Trauma applied to the brain directly and (to an extent) predictably causes changes to the consciousness, making its contents anything but certain.


I think you've misunderstood what I meant. For example, when I see a lion, I can be certain about it. It may be a hallucination and suddenly disappear, but that doesn't change anything: I really saw the lion.


No, you saw something that you interpreted as a lion. That's a very different thing.


I was referring to the experience of seeing, to the state of my mind. In other words, when I feel pain, I know with absolute certainty that I currently have this unpleasant feeling that I call pain.


Which does not mean that what you experience is pain.


Why?


Your last statement is only true until it isn't.

(Imagine a device that projects some unknown field at you and can predictably manipulate your subjective experience; it would need a sufficiently complete model of your consciousness, and it is difficult to say much more than that it is unlikely to be possible.)


It sounds like you're a philosophical idealist. Have you read Berkeley? He argues that it's insane to believe in a 'material' world, since we can never experience that nor have evidence of it. What we experience are ideas.


He's not saying that the mind can't be replicated. He's just suggesting that the approach is wrong, that many of the claims routinely made by AI are not supported by empirical observation, for example that the neuron is the unit of the brain. There are so many falsehoods just accepted, left-brain/right-brain personalities for example. He's not some dualist resorting to spiritual explanations.


Yes, criticising the current popular models is valid, but doesn't suggesting that strong AI is impossible in theory allow a dualist or mystical view?


I haven't read that article (yet) but I strongly agree with the title. I've been thinking about this for years and have come to the following conclusions:

* Physics will never be able to explain consciousness. That's because consciousness is fundamentally subjective and therefore outside the scope of science. This is very counterintuitive, because science has been able to explain almost everything so far, so why would consciousness be different? One way to answer that would be that consciousness has a "meta" relation to science; it's one level above science (in a sense, don't take it too literally).

* Descartes's mind-matter duality model is correct in principle.

* The solipsism theory is about right too.

(I'm a very rational and sceptical person, btw.)


We have come leaps and bounds since we started building an "artificial brain". Look at where deep learning is leading. Of course, we can't mimic the brain in all its majesty anytime soon, but short of that, there's so much we can achieve.


I don't necessarily disagree with your conclusion, but the argument is that a computer is a different thing from a mind. So even if a really powerful computer could convincingly mimic a mind, or otherwise achieve a similar result, it would still be a different thing.

For me a good way of thinking about this is the discovery of genetic heredity. Heredity was known for a long time before DNA was known. Eventually evolution became understood too. But, before DNA was discovered we didn't know what a trait "looked like" in much the same way as we don't know what a thought or memory "looks like."

Without that knowledge it's hard to know if a computer is mimicking a mind or being a mind.


I'd take the argument to the physical level. Do we know how the brain works? To some degree, yes. Neural networks: nodes, weights and activation functions.

If one day we can map an animal brain's architecture exactly, how new ideas are learned, etc. (is it backwards propagation or something else?), then we can replicate that behaviour using logic gates and floating-point numbers.

Mapping a brain to a digital neural-network architecture is such a prohibitively expensive problem that I wouldn't know where to begin. Perhaps a biologist would have a better idea than a computer scientist. But it's enough to say that intelligence can theoretically exist on electronic circuits.
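To make the "nodes, weights and activation functions" point concrete, here is a minimal Python sketch (an illustration, not a claim about real neurons) of a single artificial neuron: a weighted sum plus a bias, squashed by an activation function, all ordinary floating-point arithmetic that logic gates can carry out:

    import math

    def sigmoid(x):
        # A common activation function: squashes any input into (0, 1).
        return 1.0 / (1.0 + math.exp(-x))

    def neuron(inputs, weights, bias):
        # Weighted sum of the inputs plus a bias, passed through the activation.
        return sigmoid(sum(i * w for i, w in zip(inputs, weights)) + bias)

    # The weights and bias here are arbitrary illustrative values.
    print(neuron([0.5, 0.9], [0.4, -0.6], 0.1))

A network is just many of these wired together; training (e.g. backpropagation) is the part that adjusts the weights.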


Human brains are made of the same fundamental particles as computers or anything else in the universe, and they obey the same laws of physics. Thus I believe that it is only a matter of scale.


The fundamental difference between minds and computers, which many people don't understand, is that computers are modelling things. They are calculating outcomes from preconditions. Minds on the other hand, while they might also be doing that, are experiencing things. They feel, see and hear. You might make the argument that we don't know whether a computer is doing that as well, because it can't tell us - but then I could just as well propose that minds are like popcorn.


The person experiencing (using the brain) is always outside the realm of matter.


I would have thought it was obvious that minds are not like computers; the really interesting question is whether minds are Turing machines.


How would you go about proving that?

Also, is this something that has to be resolved before big advancements can be made in strong AI?


It is not provable, and it doesn't matter. It really doesn't matter if an AI is really thinking or not; all that matters is whether it can simulate thinking to a level where we can't tell the difference.


Given that you can emulate a Turing machine in your mind, I'd assume that the mind is more than a Turing machine.


Any Universal Turing Machine can emulate any other Turing machine. That's not interesting. What would be interesting would be if human minds could solve the Halting Problem.
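For anyone unfamiliar, Turing's diagonal argument for why no machine can decide halting fits in a few lines of Python; halts below is a hypothetical oracle, and the stub is deliberately unimplementable:

    def halts(program, data):
        # Hypothetical halting oracle; no correct implementation can exist.
        raise NotImplementedError

    def paradox(program):
        # Do the opposite of whatever the oracle predicts about
        # running `program` on its own source.
        if halts(program, program):
            while True:
                pass  # loop forever
        else:
            return  # halt immediately

    # Feeding paradox its own source makes any answer from halts wrong,
    # so no such oracle can exist -- for Turing machines, at least.

So "solving the Halting Problem" would mean reliably answering a question that provably has no general Turing-machine answer.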


This would prove that humans are not Turing machines - so far no human has solved this.


It is not proven that a human mind can emulate all Turing machines (given an arbitrary amount of external storage). All physical computers are "just" finite state machines (because they don't have unbounded storage); you can emulate a subset of Turing machines using an FSM just fine even though a TM is strictly more powerful than an FSM.


But no infinite tape. I can't remember if that's essential to the definition of a Turing machine?


Obviously no physical machine can have an actual unbounded tape (note that unbounded is not the same as infinite!). The interesting question is whether the brain is Turing-equivalent if given a notebook with an unlimited number of pages to keep track of the program state.


I don't think this is actually known.


So then it has the power of (at least) a universal turing machine.


Why are minds not like computers? I guess computers are built trying to mimic humans, so the question should be "Why are computers not like minds?"


Computers are based on Boolean logic and algebra; human minds are based on we don't know what. Computer logic is a subset of the human mind.


Mind is not a computer. It's actually two computers, maybe three. They co-evolved, each with different goals, and they work together as a team, with all the competition and cooperation that it entails.


That, as the definition of computer now goes, is still a computer. Whether it's 1, 2, or 1000 doesn't matter.


I feel like I should share a perspective from Cognitive Science/Psychology.

In theoretical/experimental Psychology, the dominant paradigm is a computational/representational paradigm. Take vision for example. The accepted facts are that we receive as input an impoverished view of the world, insufficient to know what's really out there. So we have to take that input and build upon it, based on assumptions and past experiences and what-have-you, until we have an internal representation of the external world. And then we can reason with this internal representation, we can refer to it when planning actions, etc. So in this view, we are not really in contact with the external world, only our reconstruction of it in our heads.

This view is probably familiar to you in some form. I am part of a group of scientists pushing an alternative view, however within Psychology we are considered fringe for questioning this dogma. We take a non-representational view of the mind. Going back to the vision debate, if you assume a stationary vantage point and a single snapshot of an "image", and if you assume that the final "output" of visual cognition is a representation in 3D-coordinates, then yes, the visual input is underspecified. However, if you assume a moving point of observation, if you realize that the really rich information is not in the snapshot but in the way the light changes over time, if you realize that in order to successfully control actions you don't need a full 3D map of the world, then there is enough input. Some of what we do is to work out mathematically that the information is there to support certain actions, then to demonstrate experimentally that indeed, people do seem to use these "shortcut" strategies that don't require intermediate representations.

Of course, there's a lot more to it than that. I could talk about thermodynamics, self-organization, and lots of other interesting stuff. But what I wanted to show is that the debate about computation is alive and well within Psychology, and indeed the computation side is extremely dominant. It may take the tack of representationalism vs non-representationalism, but the representational theories are firmly computational. Research has the explicit goal of figuring out: What is the storage/transmission format of these representations? What operations are performed on the input to create them? What operations are performed on them to use them? Etc.

Also, despite what some of the comments here suggest, at issue is not whether or not computers can model a mind. All the behaviors that we think are done without representations? We model them and study them with the aid of computer models. Of course. But that's not really interesting at all. And yes if you modeled a brain physically, you might get a mind (I'd argue you would need to model the body as well, not to mention quite a bit of environment). But that's not really the point. Today, Psychologists do research with the idea that they are setting out to discover the software that the brain is running. This is very different from the claim that a computer could model a brain, and it pervades how we think about minds, even (and especially) among experts in the field.


Minds are not like computers because we humans have 'emotions.'

Emotions affect the mind's performance and which memories are activated.


You're assuming emotions are some kind of magic that happens outside your neurons and synapses. They aren't. Provided we can simulate neurons, synapses, and how they interact accurately enough, such machines could have emotions. The challenge, obviously, is in achieving that.


Quoting Candace Pert, author of 'Molecules of Emotions':

"For decades, most people thought of the brain and its extension the central nervous system as an electrical communication system . . . resembling a telephone system with trillions of miles of intricately crisscrossing wires." But new research techniques for studying peptides and receptors show that only 2 percent of neuronal communications are electrical, across a synapse. In fact, she writes, "the brain is a bag of hormones." And those hormones affect not only the brain, but every aspect of body and mind; many memories are stored throughout the body, as changes in the structure of receptors at the cellular level. "The body," Pert concludes, "is the unconscious mind!"

From: http://www.smithsonianmag.com/history/review-of-molecules-of...

Emotions seem to occur outside the brain.


> "Emotions seem to occur outside the brain."

There isn't a clear scientific definition of what an emotion is, which makes discussion difficult. Having said that, they are not solely outside the brain. There are a number of views about human emotion and they do discuss the brain-body interaction (and feedback loops).


> Emotions seem to occur outside the brain.

But still within the body, and thus within this universe - in any case it's just a matter of definition of what the "brain" is, and we could just as well say the whole human is a brain, and emotions are just an effect of the "computation" that it performs. Whether this computation is electrical or chemical (hormones) doesn't change the fact that it's based on the same laws of interaction as everything else in the universe.



