Kurzweil identifies the basic components that make up a working human brain and concludes that if you put together a similar assemblage of components, what you get will be a working human brain.
The problem is that we're not sure we have all of the right components, we're not sure how those components need to interact, and we're not sure what subtle patterns evolution has produced to head off design traps we're not even aware of.
In short, yes, we may be confident that we are described by physical processes and that intelligence is an emergent phenomenon. But it is by no means guaranteed that what we get when we put our components together will be intelligent, or, if it is, how similar to us it will be.
So far what I've said can be dismissed as abstract hypothesizing (as, of course, can the bulk of Kurzweil's work), so let me give a concrete example to worry you. Humans have an innate capacity for language. If deaf twins never encounter a language they can understand, they will invent one, complete with a consistent grammar. (This experiment has been conducted by accident.)
Other primates have no such innate capacity for language. We've managed to teach chimps sign language, but they have been unable to master grammar.
There is a gene, FOXPRO2, that differs by two point mutations between us and chimps. Humans who lack either of those point mutations have severe grammar problems. So there is clear evidence that putting together a primate brain is NOT SUFFICIENT to get language: whatever it is that FOXPRO2 does differently between us and chimps is necessary for language.
The problem is that we don't know what FOXPRO2 actually does differently between us and chimps. We've recently shown that if you put our version of FOXPRO2 into mice, they behave differently. We can catalog the differences, but we don't know why they happen.
Now suppose we wire together something artificial that we think should be similar in capacity to a human brain. Given that we don't know what FOXPRO2 actually does to our brains, what are the chances that our artificial model captures the tweaks to brain function that FOXPRO2 makes, the ones language depends on?
My bet is, "A lot lower than Kurzweil would have people believe."
My gut tells me that we'll have more success building an exact 1:1 copy of a brain, but from digital components rather than wetware. The discovery of memristors, and of other yet-to-be-discovered building blocks, may lead to just these kinds of advances.
If consciousness still doesn't emerge in such a 1:1 copy then that would be spooky.
 - http://www.frost.com/prod/servlet/cpo/205104793.htm
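An aside of my own on why memristors keep coming up in this context: a memristor's resistance depends on the history of the charge that has flowed through it, which is roughly what a synaptic weight does. Here's a rough numerical sketch of the textbook HP linear-drift memristor model; all parameter values are illustrative assumptions, not data from any real device:

```python
# Sketch of the HP linear-drift memristor model (illustrative values only).
# The device state x (normalized doped-region width) drifts with current,
# so the resistance "remembers" past charge flow, like a synaptic weight.
import math

def simulate_memristor(steps=10000, dt=1e-6):
    R_on, R_off = 100.0, 16000.0  # bounding resistances in ohms (assumed)
    D = 10e-9                     # device thickness in meters (assumed)
    mu_v = 1e-14                  # dopant mobility, m^2/(V*s) (assumed)
    x = 0.1                       # state variable, clamped to [0, 1]
    memristance = []
    for n in range(steps):
        t = n * dt
        v = math.sin(2 * math.pi * 50e3 * t)  # 1 V, 50 kHz sinusoidal drive
        M = R_on * x + R_off * (1.0 - x)      # current memristance
        i = v / M
        # Linear drift: the state moves in proportion to the charge flowed.
        x += mu_v * R_on / D**2 * i * dt
        x = min(max(x, 0.0), 1.0)
        memristance.append(M)
    return memristance

ms = simulate_memristor()
# The resistance varies with the history of applied current, not just the
# instantaneous voltage -- that state is the "memory" in "memristor".
```

The point of the sketch: unlike a resistor, the same applied voltage produces different currents depending on what the device has seen before, which is why people talk about memristors as a hardware substrate for brain-like components.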
In a similar vein, we have succeeded in copying (digitizing) the full human genome. But to actually do something with it requires an even greater task: understanding what all the genes in it actually mean (what the proteins they express do). For the brain we haven't done the former, let alone the latter. I believe there's more point in trying to understand the workings of the brain or mind at a more abstract level.
Not to mention that simply building such a copy would yield its own set of insights.
Your description of "FOXPRO2" interested me enough that I searched Wikipedia for it after reading your comment,
then found out FoxPro2 is a programming language and database management system. Ha, a fitting misattribution for this crowd.
Then, using Google's implementation of a collective phrase memory (search suggestions), I determined you likely meant FOXP2.
You're right, of course.