Nick Wade's articles have long read as though he has an axe to grind against the Human Genome Project and its progeny (HapMap and GWAS in general). First, 10 years is an awfully short time to go from the development of a scientific tool (the human genome map) to real-world medical treatments. I emphasize tool because the genome, per se, is not really a discovery; it is a framework that helps you make discoveries.
Then there is his failure to understand genetics, or refusal to do so. Take the following sentence: "If each common disease is caused by a host of rare genetic variants, it may not be susceptible to drugs."
Let's examine that assertion by way of example: hypercholesterolemia, a common disease. Its rare familial forms -- and its common forms -- are caused by dozens of different, often rare, mutations in LDLR, APOB, and other genes like PCSK9. If Nick Wade's assertion were true, then hypercholesterolemia should be largely insusceptible to drugs, since presumably we would need dozens of different drugs to target each specific mutation.
Except he's totally wrong. We just put them all on statins, regardless of the causal mutation: statins inhibit HMG-CoA reductase, a step downstream in cholesterol synthesis, so one drug covers the whole host of variants. And they work like a charm -- demonstrably reducing all-cause mortality.
So the current evidence gives the lie to his claims. And this is just scratching the surface. Nick Wade's articles have long made it clear that he believes rare variants are the only important ones. Never mind that we often know where to look for rare variants thanks to the common ones flagged by GWAS. And common variants actually can have large effect sizes (PCSK9, anybody?). Etc.
Not only that, it's only really been in the last 3 or 4 years that sequencing technology has exploded in terms of data volume, alongside modern computer hardware that can analyze that data in any reasonable time frame. That's what's enabled whole-genome sequencing of disease patients to become practical.
When the human genome was published, the state-of-the-art sequencing machines produced a remarkably tiny fraction of what we can get now, and the state-of-the-art computer on which to crunch the data was a Pentium III. Publishing a human reference genome didn't, by itself, change the fundamentals of how future research would be conducted.
As a computer scientist working at one of the major genome centers mentioned in the NYT, I can attest to ben1040's claim.
In the last five years alone, advances in sequencing technology have moved us from talking about genomic data in megabases (Mb) to gigabases (Gb). Illumina's newest HiSeq instruments are capable of 300 Gb per run, 10x more than their competitor ABI's SOLiD instruments, which were released as little as two years ago!
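To see why that throughput matters, here's a quick back-of-the-envelope sketch in Python. The 300 Gb and 10x figures are from above; the ~3.2 Gb haploid human genome size is my added assumption, and sequencing coverage here is just raw output divided by genome size (real pipelines lose yield to filtering and alignment).

```python
# Back-of-the-envelope: why hundreds of gigabases per run makes
# whole-genome sequencing of disease patients practical.
# 300 Gb and the 10x ratio are quoted above; genome size is an assumption.

HUMAN_GENOME_GB = 3.2                 # haploid human genome, ~3.2 gigabases
RUN_OUTPUT_GB = 300.0                 # quoted HiSeq output per run
COMPETITOR_GB = RUN_OUTPUT_GB / 10    # "10x more than their competitor"

coverage = RUN_OUTPUT_GB / HUMAN_GENOME_GB  # average depth, ignoring losses

print(f"One 300 Gb run: ~{coverage:.0f}x coverage of one human genome")
print(f"Competitor instrument: ~{COMPETITOR_GB:.0f} Gb per run")
# One 300 Gb run: ~94x coverage of one human genome
# Competitor instrument: ~30 Gb per run
```

In other words, a single run now yields enough raw depth to sequence a patient's whole genome many times over, which is exactly what variant calling needs.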
It seems to sort of fit the pattern of the overoptimistic futurist who's turned overly pessimistic. In the 1990s, he was one of the big popularizers of the narrative: new genetics, though currently in its infancy, will soon change everything. Maybe he now feels burned that it didn't happen quite like he predicted (or at least, not as fast). Here's a 1998 book of essays he edited (many of them his own mid-90s pieces for the NYT) that largely takes that sort of hyper-optimistic view: http://www.amazon.com/Science-Times-Book-Genetics/dp/1558217...
The startling fact that he fails to observe is that genes are only part of the picture. Every gene is expressed in a variety of environments, and at each level (DNA, cell, tissue, organ, organism, etc.) environmental impacts are significant.
I'm very excited about the potential of 23andMe to offer a significant leap forward in this sort of statistical work that can tease out more environmental factors, because of the elaborate surveys they have created under 23andWe.
Of course, these are not all going to be of publishable quality at first, but the important thing is to start to understand (and respect) what is going on.