
You missed the point of that well-put criticism.

Kurzweil can identify the basic components that make up a working human brain, and concludes that if you put together a similar assemblage of components, what you'll get is going to be a working human brain.

The problem is that we're not sure we have all of the right components, we're not sure of how those components need to interact, we're not sure of what subtle patterns have arisen through evolution to head off design traps that we're not even aware of.

In short, yes, we may be confident that we are described by physical processes and that intelligence is an emergent phenomenon. But it is by no means guaranteed that when we put our components together, what we get will be intelligent. Or, if it is, how similar to us it will be.

So far what I've said can be dismissed as abstract hypothesizing (of course, so can the bulk of Kurzweil's work), so let me give a concrete example to worry you. Humans have an innate capacity for language. If you have deaf twins who never encounter a language that they can understand, they will invent one, complete with a consistent grammar. (This experiment has been conducted by accident.)

Other primates have no such innate capacity for language. We've managed to teach chimps sign language, but they have been unable to master grammar.

There is a gene, FOXPRO2, that has 2 point mutations between us and chimps. Humans who lack either of those point mutations have extreme grammar problems. There is therefore clear evidence that putting together a primate brain is NOT SUFFICIENT to get language. Whatever it is that FOXPRO2 does differently between us and chimps is necessary for language.

The problem is that we don't know what FOXPRO2 is actually doing differently between us and chimps. We've recently shown that if you put our version of FOXPRO2 into mice, they behave differently. We can catalog the differences but we don't know why that happens.

Now suppose that we wire together something artificial that we think should be similar in capacity to a human brain. Given that we don't know what FOXPRO2 actually does to our brains, what are the chances that our artificial model captures the tweaks to brain function that FOXPRO2 makes, and that are necessary for language?

My bet is, "A lot lower than Kurzweil would have people believe."




You make a great argument, and I agree that a model which can only ever be an approximation of a human brain is very unlikely to exhibit intelligence.

My gut tells me that we'll have more success building an exact 1:1 copy of a brain, but from digital components rather than wetware. The discovery of memristors[1], and of other yet-to-be-discovered building blocks, may lead to just these kinds of advances.

If consciousness still doesn't emerge in such a 1:1 copy then that would be spooky.

[1] - http://www.frost.com/prod/servlet/cpo/205104793.htm


Making a 1:1 copy of the brain without understanding its mechanisms is pointless, because there's no way to bracket the problem. You don't know how low-level you have to go: if there's even a single crucial quantum interaction, you have to model that level as well, increasing the complexity by many orders of magnitude (if you have to model individual atoms or quantum states, the simulation wouldn't even fit on Earth, I'm afraid). Similarly for the higher levels: it's obvious that brain development depends crucially on a stimulating environment (and meaningful interaction with it), so are you going to simulate a complete environment and upbringing? Reminds me of the Sagan quote: "If you wish to make an apple pie from scratch, you must first invent the universe."

In a similar vein, we have succeeded in copying (digitizing) the full human genome. But to actually do something with it requires the even greater task of understanding what all those genes actually mean (what the proteins they express do). For the brain we haven't even done the former, let alone the latter. I believe there's more point in trying to understand the workings of the brain or mind at a more abstract level.


I agree with you that a more abstract understanding would be more useful than a naive copy; however, I think calling it "pointless" is a bit of an exaggeration. Imagine being able to make an indistinguishable 1:1 copy of your own brain at the exact moment at which you die: voilà, immortality. Tad Williams wrote an interesting bit of fiction about exactly this, the Otherland series - well worth a read.

Not to mention that simply building such a copy would yield its own set of insights.


I'm afraid 1:1 brain copies will stay in the realm of science fiction, so it's not an exaggeration. This paper provides some more food for thought on my position: http://www.mitpressjournals.org/doi/pdf/10.1162/coli.2009.35...


Well, here we have an interesting demonstration of a quirk in the human mind. I guess you could call it a misattribution of word memory.

Your description of "FOXPRO2" interested me enough that I searched Wikipedia for it after reading your comment, then found out that FoxPro2 is a programming language and database management system. Ha, a fitting misattribution for this crowd.

Then, using Google's implementation of a collective phrase memory (search suggestions), I determined you likely meant: http://en.wikipedia.org/wiki/FOXP2


D'oh!

You're right, of course.



