That's nice in theory, but it does not happen for most published research.
> I'm in medicine, and I read papers all the time detecting a statistical difference between control and experimental groups, but the difference is so tiny that it's meaningless.
I'm trained in statistics, my ex was an MD, and for a couple of years I read the NEJM for fun. Most of the published results are barely statistically significant for the small group tested ("our sample included 40 Caucasian females between the ages of 37 and 48, and we have a p-value of 0.03", with no mention of the context that might make that p-value meaningless, but let's assume they got that part right). Then, a couple of years later, some other study takes that result as absolute truth but assumes it applies to any woman aged >30. And a couple of years after that, it is assumed to be universal and speculated to apply to males as well.
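The complaint about "statistically significant but meaningless" differences has a simple mechanical cause: with a large enough sample, even a trivially small effect produces a huge test statistic. A minimal sketch (the blood-pressure numbers and the 0.3-unit "effect" are invented for illustration, using only the standard library):

```python
import math
import random
import statistics

random.seed(0)

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    se = math.sqrt(statistics.variance(a) / len(a) +
                   statistics.variance(b) / len(b))
    return (statistics.mean(a) - statistics.mean(b)) / se

n = 200_000  # a very large trial
# Two groups whose true means differ by a clinically meaningless 0.3 units.
control = [random.gauss(120.0, 15.0) for _ in range(n)]
treated = [random.gauss(119.7, 15.0) for _ in range(n)]

t = welch_t(control, treated)
print(f"t statistic ≈ {t:.1f}")
# |t| well above 2 here means p is tiny: "significant", yet the
# underlying difference is far too small to matter to any patient.
```

The point is that a p-value alone tells you nothing about whether the effect size matters; that judgment call is exactly the context that often gets dropped.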
Is your experience different?
> I blame bad science reporting for a lot of the anger you are feeling.
I blame tenure publishing requirements. Bad reporting certainly deserves its share of contempt, but people these days do everything they can to meet the publishing requirements for tenure. Most stay away from outright fabrication, but short of that, any manipulation of the data that makes it fit for a higher-caliber publication gets done, including dropping the background context so nicely exemplified by this xkcd comic: http://xkcd.com/882/ . Often it is the researchers themselves doing the bad reporting, with no outside help.
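The xkcd 882 failure mode (test twenty jelly-bean colors, report the one that comes up "significant") is easy to demonstrate by simulation. A sketch, assuming every null hypothesis is actually true, so p-values are uniform on [0, 1]:

```python
import random

random.seed(1)

ALPHA = 0.05
TESTS = 20       # twenty jelly-bean colors, as in the comic
TRIALS = 10_000  # repeat the whole 20-test "study" many times

hits = 0
for _ in range(TRIALS):
    # Under a true null, each p-value is a uniform draw on [0, 1],
    # so each test clears ALPHA by pure chance 5% of the time.
    p_values = [random.random() for _ in range(TESTS)]
    if any(p < ALPHA for p in p_values):
        hits += 1

rate = hits / TRIALS
print(f"Studies with at least one 'significant' result: {rate:.1%}")
# Theory: 1 - 0.95**20 ≈ 64% of all-null studies still yield
# a publishable p < 0.05, if you only report the winner.
```

Nothing here is fraud in the narrow sense; every individual p-value is computed correctly. The dishonesty is entirely in which results make it into the paper.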