
Sorry, not sure if I'm understanding you correctly. Are you arguing that there is more to consciousness than just the atoms in our brains? If so what is your reason for thinking this?



You missed the point of that well-put criticism.

Kurzweil identifies the basic components that make up a working human brain, and concludes that if you put together a similar assemblage of components, what you get will be a working human brain.

The problem is that we're not sure we have all of the right components, we're not sure of how those components need to interact, we're not sure of what subtle patterns have arisen through evolution to head off design traps that we're not even aware of.

In short, yes, we may be confident that we are described by physical processes and that intelligence is an emergent phenomenon. But it is by no means guaranteed that when we put our components together, what we get will be intelligent. Or, if it is, how similar to us it will be.

So far what I've said can be dismissed as abstract hypothesizing (of course, so can the bulk of Kurzweil's work), so let me give a concrete example to worry you. Humans have an innate capacity for language. Deaf twins who never encounter a language they can understand will invent one, complete with a consistent grammar. (This experiment has been conducted by accident.)

Other primates have no such innate capacity for language. We've managed to teach chimps sign language, but they have been unable to master grammar.

There is a gene, FOXPRO2, that differs by two point mutations between us and chimps. Humans who lack either of those point mutations have severe grammar problems. There is therefore clear evidence that putting together a primate brain is NOT SUFFICIENT to get language. Whatever it is that FOXPRO2 does differently between us and chimps is necessary for language.
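
(For the curious: "two point mutations" just means two positions where the protein sequences differ. A toy sketch of counting them, with sequences invented for illustration rather than the real gene data:)

    # Toy sketch: count point differences between two aligned protein
    # sequences. These sequences are made up for illustration; they are
    # NOT the real human/chimp data.
    human = "MTSSSAQNSA"
    chimp = "MTSSTAQNGA"
    diffs = [(i, c, h) for i, (c, h) in enumerate(zip(chimp, human)) if c != h]
    print(diffs)  # [(4, 'T', 'S'), (8, 'G', 'S')] -- two point differences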

The problem is that we don't know what FOXPRO2 is actually doing differently between us and chimps. We've recently shown that if you put our version of FOXPRO2 into mice, they behave differently. We can catalog the differences but we don't know why that happens.

Now suppose that we wire together something artificial that we think should be similar in capacity to a human brain. Given that we don't know what FOXPRO2 actually does to our brains, what are the chances that our artificial model manages to capture the tweaks FOXPRO2 makes to brain function that are necessary for language?

My bet is, "A lot lower than Kurzweil would have people believe."

-----


You make a great argument, and I agree with you that it is very unlikely that a model which can only ever be an approximation of a human brain will ever exhibit intelligence.

My gut tells me that we'll have more success building an exact 1:1 copy of a brain, but from digital components rather than wetware. The discovery of memristors [1], and other yet-to-be-discovered building blocks, may lead to just these types of advances.

If consciousness still doesn't emerge in such a 1:1 copy then that would be spooky.

[1] - http://www.frost.com/prod/servlet/cpo/205104793.htm

-----


Making a 1:1 copy of the brain without understanding its mechanisms is pointless, because there's no way to bracket the problem. You don't know how low-level you have to go: if there's even a single crucial quantum interaction, you have to model that level as well, increasing the complexity by many orders of magnitude (if you have to model quanta or atoms, the simulation wouldn't even fit on Earth, I'm afraid). Similarly for the higher levels: it's obvious that brain development depends crucially on a stimulating environment (and meaningful interaction with it), so are you going to simulate a complete environment and upbringing? Reminds me of the Sagan quote: "If you wish to make an apple pie from scratch, you must first invent the universe."
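
(To put rough numbers on "wouldn't fit on Earth", a back-of-envelope sketch; every figure below is an assumed order of magnitude, not a measurement:)

    # Back-of-envelope: storage for ONE instantaneous atom-level brain
    # state. All numbers are rough assumptions.
    atoms_in_brain = 1e26   # ~1.4 kg of (mostly) water, order of magnitude
    bytes_per_atom = 100    # position, momentum, bonding state... (a guess)
    state_bytes = atoms_in_brain * bytes_per_atom   # ~1e28 bytes
    world_storage = 3e20    # ~300 exabytes, a 2011-era estimate
    print(state_bytes / world_storage)  # ~3e7: tens of millions of times the
                                        # world's storage, for one snapshot,
                                        # before simulating any dynamics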

In a similar vein, we have succeeded in copying (digitizing) the full human genome. But actually doing something with it requires the even greater task of understanding what all those genes actually mean (what the proteins they express do). For the brain we haven't done the former, let alone the latter. I believe there's more point in trying to understand the workings of the brain or mind at a more abstract level.

-----


I agree with you that a more abstract understanding would be more useful than a naive copy; however, I think calling it "pointless" is a bit of an exaggeration. Imagine being able to make an indistinguishable 1:1 copy of your own brain at the exact moment at which you die: voilà, immortality. Tad Williams wrote an interesting bit of fiction about exactly this, called the Otherland series - well worth a read.

Not to mention that simply building such a copy would yield its own set of insights.

-----


I'm afraid 1:1 brain copies will stay in the realm of science fiction, so it's not an exaggeration. This paper provides some more food for thought on my position: http://www.mitpressjournals.org/doi/pdf/10.1162/coli.2009.35...

-----


Well, here we have an interesting demonstration of a quirk in the human mind. I guess you could call it a misattribution of word memory.

Your description of "FOXPRO2" interested me enough that I searched Wikipedia for it after reading your comment, and found out that FoxPro 2 is a programming language and database management system. Ha, a fitting misattribution for this crowd.

Then, using Google's implementation of a collective phrase memory (search suggestions), I determined you likely meant: http://en.wikipedia.org/wiki/FOXP2

-----


D'oh!

You're right, of course.

-----


I suspect the argument is more along the lines of, "The human brain does not obey the superposition principle." A sentence is just letters, but that doesn't mean that thorough examination of each letter provides you with the knowledge necessary to understand the sentence.

Anyway, saying "it's just atoms" is also misleading, because one must also contend with the laws of physics that drive the atoms. Those laws are not yet thoroughly understood, and even with our significant current understanding we still have trouble programmatically predicting protein folding, to say nothing of the staggering complexity contained in the brain.
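
(For a sense of scale on the folding problem, here's the usual Levinthal-style arithmetic; both inputs are standard textbook assumptions, not measurements:)

    # Levinthal-style back-of-envelope: why brute-force search of protein
    # conformations is hopeless.
    residues = 100
    conformations = 3 ** residues   # ~5e47 shapes, at ~3 states per residue
    rate = 1e13                     # conformations sampled per second (assumed)
    years = conformations / rate / 3.15e7
    print(f"{years:.1e} years")     # ~1.6e27 years; the universe is ~1.4e10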

-----


Thanks for your answer, it's given me something to think about.

A few things immediately come to mind though:

A sentence is just letters, but that doesn't mean that thorough examination of each letter provides you with the knowledge necessary to understand the sentence

I see what you're saying here, but consider this example:

Give someone a box of clock components and a clock from which to draw inspiration, and without any understanding of how a clock works or how the cogs and springs are manufactured, they will, given enough perseverance, build a working clock.

This simple analogy illustrates that understanding exactly how each sub-component of a system functions is not necessary to be able to exploit its usefulness.

-----


I was primarily making an argument the other way around: I said that understanding the subcomponents (i.e. the letters) did not guarantee understanding of the whole (i.e. the sentence), while you're giving an example in which someone builds a whole, presumably by understanding and replicating the connections of major components, but does not understand the parts. More analogous to my point would be attempting to understand the workings of a clock by examining each gear.

Also, I don't know that your thought experiment holds water. I can certainly conceive of a universe in which a person never makes the logical leap from holding a clock and parts—or even having a thorough understanding of the workings of the subcomponents of a clock—to building their own. The pre-Columbian New World civilizations, for example, had all the resources to build wheeled vehicles, and certainly understood the principle behind them enough to build wheeled toys or use rolling logs to transport large objects, and the Inca Empire even had a sprawling complex of roads—but never in their long history did the notion of a wheeled cart occur to any of them. Which is to say: the search space of ideas is vast, and one can't reasonably be expected to exhaust them all even with help.

-----


That's not really the point I was making. My point is that you don't have to understand the workings of a clock (or its subcomponents) to be able to build one given the subcomponents and an example from which to work. Simply mimicking exactly what you observe will produce the desired outcome.

There are lots of examples where the underlying workings are not understood and yet useful work is done. Look at medicine, for example. For thousands of years people knew that if you mix herbs A and B and boil them for time X, you get something that fights infection or helps with headaches. The underlying biochemistry doesn't need to be known or understood to be able to follow the steps and get the desired outcome.

The same might hold true for the brain. If we can catalogue all the connections and information pathways (chemical, electrical), we may not need to be able to explain how every combination of subcomponents interacts, but we may know that a particular arrangement gives the outcome we're after.
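
(A minimal sketch of what such a catalogue might look like, the brain as a weighted, directed graph of neurons and synapses; names, weights, and counts are illustrative only:)

    # Toy connectome: record who synapses onto whom, and how strongly.
    from collections import defaultdict

    connectome = defaultdict(list)

    def add_synapse(pre, post, weight):
        """Record one pathway: neuron `pre` synapses onto neuron `post`."""
        connectome[pre].append((post, weight))

    add_synapse("n1", "n2", 0.8)   # excitatory connection
    add_synapse("n2", "n3", -0.4)  # inhibitory connection
    print(dict(connectome))

    # The real catalogue would hold ~1e11 neurons and ~1e14 synapses.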

PS: "the search space of ideas is vast, and one can't reasonably be expected to exhaust them all even with help."

That's true, but in this example we're not searching for anything. Over millions of years, one solution to intelligence, out of an infinity of other possibilities, has already evolved. Each of us already has a working version of what we want to replicate in our own skull. Now we need to tease out all of the cogs and springs and how they are arranged, arrange them in the same ways, and, unless there truly is a metaphysical component, we will have something indistinguishable from human intelligence/consciousness.

-----


I'm not the author of the comment you're replying to, but I'm pretty sure that consciousness is more than atoms. Feeling pain can't be explained as an interaction of physical particles. You could describe the chemical processes in the brain accompanying pain in very high detail, but at no point can you explain why it hurts so much when the atoms in the brain are in a specific state.

-----


Feeling pain can't be explained as an interaction of physical particles.

Because no such explanation can exist, or because we don't have a sufficient understanding yet? See "God of the gaps".

-----


Because physics doesn't even have the language to describe the feeling of pain. Similarly, physics doesn't have a language to describe how "green" looks; it can only describe the frequency of green light, etc.

Also have a look at http://en.wikipedia.org/wiki/Qualia; it's basically what I'm talking about.

-----


Feeling pain can't be explained as an interaction of physical particles

Do you mean physical or emotional pain? I imagine that physical pain is very well understood and can be explained (maybe) in its entirety as a purely material (as in atoms) phenomenon.

I personally don't see any reason why emotional pain can't be explained in the same way i.e. without requiring a metaphysical component.

-----


Both; it doesn't really matter, basically any state of consciousness. The physical processes in the brain accompanying physical pain may be understood very well, but we don't understand the mechanism that translates a physical state of brain matter into the feeling of hurt. Why does it hurt when we arrange the brain's atoms in a specific way?

-----


Your explanation doesn't make sense. We know that there are people who

a) exhibit little or no emotional pain compared to "normal" people (sociopaths/psychopaths)

b) are autistic and have "...difficulty with “subtle emotions like shame, pride, things that are much more socially oriented” [1]

c) have congenital insensitivity to (physical) pain, some of whom experience the condition due to excess production of endorphins in the brain

You seem to suggest that the question "but why does it hurt" is somehow mystical or metaphysical. I don't think it is. It hurts because your brain is wired so that it does. If it is wired otherwise (as in some people it is) then it doesn't hurt.

[1] http://bigthink.com/ideas/do-people-with-autism-experience-e...

-----


Yes, but the question is why it hurts when the brain is wired in a specific way. What makes some arrangements of the brain's atoms painful? Physics doesn't even have a way to scientifically define what "feeling pain" is. Physics can only define the physical processes accompanying the feeling of pain.

-----


"why does it hurt when the brain is wired in a specific way?"

Because organisms that evolved to avoid that thing we call pain survived long enough to pass on their genes, whereas those that didn't got burned up, crushed, or otherwise extinguished?

In other words, an aversion to that thing called pain gave an evolutionary leg up. And it seems that this aversion is fundamental to life, since every creature has a tendency toward self-preservation.

There are many examples of people who enjoy physical pain and practice cutting, even down to the bone, sometimes including amputation. Clearly this is an example of brain wiring that is not beneficial to the individual and that, in extreme cases, will filter itself out of the gene pool.

-----


Why can't we explain why it hurts so much?

We can describe the entirety of an operating system down to its individual ones and zeroes. Do you posit that some irreducible "Tuxness" state exists that computer scientists just refuse to acknowledge?

-----


The unpleasant feeling of pain is a very real phenomenon, unlike Tuxness. It can't be reduced to smaller parts. All that we can do is say, "When we arrange the brain's atoms like this, it hurts."

We could construct robots that react to pain exactly like humans (screaming, sweating, ...) without feeling anything. Why aren't we like that?
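
(The point in code, a trivially dumb "philosophical zombie"; everything here is invented for illustration:)

    # Maps damage signals to human-like pain behaviour, yet nothing in
    # this function plausibly feels anything.
    def pain_response(damage_level):
        if damage_level > 0.8:
            return ["scream", "sweat", "withdraw limb"]
        if damage_level > 0.3:
            return ["wince", "say 'ouch'"]
        return []

    print(pain_response(0.9))  # behaves like us -- so where does the hurt live?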

-----


“Science cannot solve the ultimate mystery of nature. And that is because, in the last analysis, we ourselves are a part of the mystery that we are trying to solve.” Max Planck

-----


What is this ultimate mystery?

If you posit "why anything exists instead of nothing", then I will have to ask whether "why" even makes sense in this context.

From what we now know of quantum physics, "nothingness" itself might be either less likely than something existing, or nothing but a fiction born of our mental heuristics.

-----



