The essence of this sentence is not in the individual letters.
Anyway, saying "It's just atoms" is also misleading, because one must also contend with the laws of physics that drive the atoms. Those laws are not yet thoroughly understood, and even with our significant current understanding we still have trouble programmatically predicting protein folding, to say nothing of the staggering complexity contained in the brain.
A few things immediately come to mind though:
A sentence is just letters, but that doesn't mean that a thorough examination of each letter provides you with the knowledge necessary to understand the sentence.
I see what you're saying here, but consider this example:
Give someone a box of clock components and a clock from which to draw inspiration, and, without any understanding of how a clock works or how the cogs and springs are manufactured, they will, given enough perseverance, build a working clock.
This simple analogy illustrates that understanding exactly how each sub-component of a system functions is not necessary to be able to exploit its usefulness.
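The trial-and-error assembly in the clock analogy can be sketched as a toy program; the part names and the idea of comparing against a reference arrangement are my own illustration, not from the discussion:

```python
import random

# Toy sketch of "perseverance without understanding": shuffle the loose
# parts at random until they match the known-working arrangement (the
# reference clock). No model of WHY the arrangement works is ever needed.
random.seed(1)
reference = ["spring", "escapement", "gear", "hands"]  # the working clock
parts = reference[:]  # the box of loose components
attempts = 0
while True:
    random.shuffle(parts)
    attempts += 1
    if parts == reference:
        break
print(f"working clock found after {attempts} attempts")
```

Blind shuffling is the worst possible strategy, of course; the point is only that it still succeeds eventually when a working example exists to check against.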
Also, I don't know that your thought experiment holds water. I can certainly conceive of a universe in which a person never makes the logical leap from holding a clock and parts—or even having a thorough understanding of the workings of the subcomponents of a clock—to building their own. The pre-Columbian New World civilizations, for example, had all the resources to build wheeled vehicles, and certainly understood the principle behind them enough to build wheeled toys or use rolling logs to transport large objects, and the Inca Empire even had a sprawling complex of roads—but never in their long history did the notion of a wheeled cart occur to any of them. Which is to say: the search space of ideas is vast, and one can't reasonably be expected to exhaust them all even with help.
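The "vast search space" point can be made concrete with a toy count; the choice of 20 parts is arbitrary, purely for illustration:

```python
import math

# Even a modest kit of 20 distinct parts admits 20! orderings.
# Trying them at one attempt per second would take several times
# the age of the universe (~13.8 billion years).
arrangements = math.factorial(20)
print(arrangements)  # 2432902008176640000

seconds_per_universe_age = 13.8e9 * 365.25 * 24 * 3600
print(arrangements / seconds_per_universe_age)  # roughly 5.6 universe-ages
```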
There are lots of examples where the underlying workings are not understood and yet useful work is done. Look at medicine, for example. For thousands of years people knew that if you mix herbs A and B and boil them for time X, you get something that fights infection or helps with a headache. The underlying biochemistry doesn't need to be known or understood to be able to follow the steps and get the desired outcome.
The same might hold true for the brain. If we can catalogue all the connections and information pathways (chemical, electrical), we may not need to explain how every combination of subcomponents interacts; we may simply know that a particular arrangement gives the outcome we're after.
PS: "the search space of ideas is vast, and one can't reasonably be expected to exhaust them all even with help."
That's true, but in this example we're not searching for anything. Over millions of years, one solution to intelligence, out of an infinity of other possibilities, has already evolved. Each of us already has a working version of what we want to replicate in our own skull. Now we need to tease out all of the cogs and springs and how they are arranged, arrange them in the same ways, and unless there truly is a metaphysical component, we will have something indistinguishable from human intelligence/consciousness.
Kurzweil identifies the basic components that make up a working human brain and concludes that if you put together a similar assemblage of components, what you'll get is a working human brain.
The problem is that we're not sure we have all of the right components, we're not sure how those components need to interact, and we're not sure what subtle patterns have arisen through evolution to head off design traps that we're not even aware of.
In short, yes, we may be confident that we are described by physical processes and that intelligence is an emergent phenomenon. But it is by no means guaranteed that when we put our components together, what we'll get will be intelligent. Or, if it is, how similar to us it will be.
So far what I've said can be dismissed as abstract hypothesizing (of course, so can the bulk of Kurzweil's work), so let me give a concrete example to worry you. Humans have an innate capacity for language. If you have deaf twins who never encounter a language that they can understand, they will invent one, complete with a consistent grammar. (This experiment has been conducted by accident.)
Other primates have no such innate capacity for language. We've managed to teach chimps sign language, but they have been unable to master grammar.
There is a gene, FOXPRO2, that has 2 point mutations between us and chimps. Humans who lack either of those point mutations have extreme grammar problems. There is therefore clear evidence that putting together a primate brain is NOT SUFFICIENT to get language. Whatever it is that FOXPRO2 does differently between us and chimps is necessary for language.
The problem is that we don't know what FOXPRO2 is actually doing differently between us and chimps. We've recently shown that if you put our version of FOXPRO2 into mice, they behave differently. We can catalog the differences but we don't know why that happens.
Now suppose that we wire something artificial together that we think should be similar in capacity to a human brain. Given that we don't know what FOXPRO2 actually does to our brains, what are the chances that our artificial model manages to capture the necessary tweaks that FOXPRO2 makes to brain function for language?
My bet is, "A lot lower than Kurzweil would have people believe."
My gut tells me that we'll have more success building an exact 1:1 copy of a brain, but from digital components rather than wetware. The discovery of memristors, and other yet-to-be-discovered building blocks, may lead to just these types of advances.
If consciousness still doesn't emerge in such a 1:1 copy then that would be spooky.
 - http://www.frost.com/prod/servlet/cpo/205104793.htm
In a similar vein, we have succeeded in copying (digitizing) the full human genome. But to actually do something with that requires the even greater task of understanding what all the genes in it actually mean (what the proteins they express do). For the brain we haven't done the former, let alone the latter. I believe there's more point in trying to understand the workings of the brain or mind at a more abstract level.
Not to mention that simply building such a copy would yield its own set of insights.
Your description of "FOXPRO2" interested me enough that I searched Wikipedia for it after reading your comment,
then found out FoxPro2 is a programming language and database management system, ha, a fitting misattribution for this crowd.
Then, using Google's implementation of a collective phrase memory (search suggestions), I determined you likely meant -
You're right, of course.
Because no such explanation can exist, or because we don't have a sufficient understanding yet? See "God of the gaps".
Also have a look at http://en.wikipedia.org/wiki/Qualia, it's basically what I'm talking about.
Do you mean physical or emotional pain? I imagine that physical pain is very well understood and can perhaps be explained in its entirety as a purely material (as in atoms) phenomenon.
I personally don't see any reason why emotional pain can't be explained in the same way, i.e. without requiring a metaphysical component.
a) exhibit little or no emotional pain compared to "normal" people (sociopaths/psychopaths)
b) are autistic and have "...difficulty with “subtle emotions like shame, pride, things that are much more socially oriented”"
c) have congenital insensitivity to (physical) pain, some of whom experience the condition due to excess production of endorphins in the brain
You seem to suggest that the question "but why does it hurt" is somehow mystical or metaphysical. I don't think it is. It hurts because your brain is wired so that it does. If it is wired otherwise (as in some people it is) then it doesn't hurt.
Because organisms that evolved to avoid that thing we call pain survived long enough to pass on their genes, whereas those that didn't got burned up, crushed or in other ways extinguished?
In other words an aversion to that thing called pain gave an evolutionary leg up. And it seems that this aversion is fundamental to life since every creature has a tendency toward self preservation.
There are many examples of people who enjoy physical pain, cutting themselves even down to the bone, sometimes to the point of amputation. Clearly this is an example of brain wiring which is not beneficial to the individual and which, in extreme cases, will self-filter out of the gene pool.
We can describe the entirety of an operating system down to its individual ones and zeroes. Do you posit that some irreducible "Tuxness" state exists that computer scientists just refuse to acknowledge?
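The "individual ones and zeroes" point can be made concrete with a minimal sketch; the three-byte blob here is just a stand-in for a real binary such as a kernel image:

```python
# Any program is fully describable as a bit string, yet the bit-level
# view carries none of the system's higher-level meaning or "essence".
data = b"Tux"  # stand-in for an arbitrary binary blob
bits = "".join(f"{byte:08b}" for byte in data)
print(bits)  # 010101000111010101111000
```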
We could construct robots that react to pain exactly like humans (screaming, sweating, ...) without feeling anything. Why aren't we like that?
If you posit "why anything exists instead of nothing", then I will have to ask whether "why" even makes sense in this context.
From what we now know of quantum physics, "nothingness" itself might be less likely than something existing, or perhaps nothing but a fiction born of our mental heuristics.
Once we dive deep enough into the functioning of the brain, and its heuristics in particular, the very idea of "essences" looks more and more like mental JPEG compression artifacts and less and less like pyramids on Mars.