While I can't speak to the accuracy of the book criticized, science in general is riddled with similar problems. Follow the citation trail and you'll often find that a cited article doesn't say what was claimed or says something similar but not quite the same. Alternatively, you might see that the cited article does say what is claimed, but the evidence is weak.
When researchers talk about all of the "low hanging fruit" being taken, it seems to me that they're blind to all the nonsense that appears once you start following the citation trail. Maybe every topic has been touched, but even something that seems definitive in a review article could have major flaws when examined more closely.
I'm almost done with a PhD in engineering, and this has been my experience at least. I try to "debunk" something in roughly half my publications now.
Edit: I don't mean to suggest that identifying many of these problems is easy, just that it's not done frequently enough. For example, if you're doing research in a particular field, you're probably basing it partly on previous review articles. Take a look at some primary sources in addition to that. This applies doubly if you're writing a review article. Don't just mirror what previous review articles say and cite some newer papers. Find some old but good papers that were missed by previous reviews. Check primary sources. Etc. This is the job of someone writing a review, in my view.
It's also quite jarring to come across a paper on a niche topic (e.g., a specific hormone's effects on certain biological processes) that completely botches simple, fundamental, and accepted facts in your field; facts that the main thesis relies on.
It's absolutely silly, and I find little of use in the authors' conclusions or their observations. After spending way too much time parsing through endless research papers, the only things I pay attention to anymore are the methodology and data.
These tell me: is the data relevant to my work? And was the data collected "properly"? (I swear, half the time the researchers half-ass the methodology so badly that the results are fairly worthless.)
> the only things I pay attention to anymore are the methodology and data.
> These tell me: is the data relevant to my work? And was the data collected "properly"? (I swear, half the time the researchers half-ass the methodology so badly that the results are fairly worthless.)
I've come to the same conclusion, and I'm a theorist myself. Theory is hit or miss, mostly miss. I still read it just to find the nugget of truth if there is one. If the data is relevant and was collected properly, then it can be a goldmine, regardless of the theoretical explanation given by the researchers. But often the researchers measure the wrong thing, or measure the right thing in the wrong conditions or with some other problem. (One major problem I've found is that the uncertainties on many measurements in my field are enormous, but don't need to be, and very few seem to have noticed or cared.)