Something is rotten in the core of science. These days I give the research the same weight I give the anecdote... and that is no weight at all.
Science has been supplanted by money and politics... At least anecdotes admit they're anecdotes!
I'm as critical as anyone (probably more so, check my comment history) of academic biology because of my background in it. There are certainly things wrong with it. And due to the nature of biology, replicating results is really hard. It's a fact of life when you deal with systems that are not perfect, not identical and very opaque.
But to say that "Science has been supplanted by money and politics" is stretching the problems of biology into a mountain of conspiracy.
Furthermore, I'm reading your "source" and it reads loudly as "I'm an underfunded big-pharma researcher who has neither the time nor the resources to properly replicate studies". Did you know that most big pharma labs do not have access to the academic literature? They mostly read abstracts because there is little budget to actually purchase the required papers.
How much do you trust labs that are A) only trying to recreate data so they can make a drug out of it and B) aren't even reading the original data? While academic labs can have grad students toil away on hard experiments for literally years before they perfect them...how long do you think Pfizer or Merck or GlaxoSmithKline is going to let their paid researchers fiddle away on a project that is probably low priority anyway?
Because, of course, the high-priority projects are the reformulations of penis-enlarging drugs or cholesterol medication...you know, the ones that actually make money.
If you are looking for snake oil and shady research, I dare you to read any research paper that comes out of big pharma labs. We would routinely read them just for laughs because they are (often) downright terrible.
To say "most big pharma labs" do not have access to the literature is laughable. We had better access than most academic institutions. If we needed a paper we didn't have access to, it took a few hours to get it. The company was more than willing to pay the $50 to get a copy of whatever paper, since we would often blow $50 running one experiment. Many of the smaller biotechs might have poor access to journals, but even then, if you could justify the cost, you could get it.
Second of all, yes I trust labs that are trying to recreate data to make a drug out of it. You have to remember that these attempts to recreate data were a very important data point on a potential multi-million (billion?) dollar investment in a new target, these are NOT low priority projects. They WANT the data to be true. They have zero incentive for the data to not be reproducible.
Having worked in both academic and commercial labs, I would say the incentive to "tweak" results is much greater in academic labs for the following reasons:
1) Often results are never double checked in an academic lab unless the work is used in a later project. Contrast this with a pharma lab where if the data is positive, you'll have to prove it again and again.
2) Academics (both profs and students) live and die by papers, not so in industry (in fact, in the company I worked in, they preferred if you didn't publish)
3) Work in academia is often performed by relatively inexperienced undergrad and grad students, while big pharma scientists often have years of experience.
>To say "most big pharma labs" do not have access to the literature is laughable. We had better access than most academic institutions. If we needed a paper we didn't have access to, it took a few hours to get it. The company was more than willing to pay the $50 to get a copy of whatever paper, since we would often blow $50 running one experiment.
I'll admit that my knowledge of big pharma journal access is colored by those in big pharma that I've talked to (anecdotal evidence, oh the irony). Perhaps they just had poor departments or bad access, I don't know.
However, every university that I've been at has instant access to journals. I never had to wait hours for a paper...we had free rein of just about every journal. Even at my relatively small and poor undergraduate institute.
>1) Often results are never double checked in an academic lab unless the work is used in a later project.
99% of projects in academia are building off some previous grad student or post-doc's work. Sure, there are projects which are nearly impossible to replicate (I should know, I spent 1.5 years of my life trying to replicate a previous grad's project). But it's equally laughable to say that data is never double-checked - a professor's career is a long string of projects building on previous projects.
>2) Academics (both profs and students) live and die by papers, not so in [industry]
I'll concede that there is often pressure to publish positive results in an academic setting. However, as you rightly mentioned, academics live and die by their papers. It just takes one lab refuting your paper to have a burned career. While I agree that many academics prefer to just ignore papers they can't recreate, there is still a lot riding on publishing replicable data.
>3) Work in academia is often performed by relatively inexperienced undergrad and grad students, while big pharma scientists often have years of experience.
This is a pretty baseless statement. I know plenty of techs working at big pharma that just graduated with an undergrad degree and have zero wet-bench experience (just like I know of plenty who did the same in academia). Conversely, I can't even count the number of post-docs and senior scientists that work at various universities, with literally centuries of experience between them.
1. The big pharma guys have instant access to journals. When I say we had to wait a couple hours, it was because I was looking for a paper from "The Russian Journal of Chemistry" from 1912. We had a vendor who could track down anything. For any of the big journals, we had the same access as academia.
2. We agree on this point. If a lab experiment is used in a later project, it HAS to work or else the future work can't occur. However, lots of projects have "arms", where the experiment is an interesting observation that is never pursued. These are often "one-off" experiments that are published, but never repeated in the same lab.
3. I am by no means painting academics with a broad brush here. I think most academic research is done on the up-and-up and the results are valid, if not hard to replicate (this is research!). I think one issue is the one pointed out in the parent comment. You run 5 reactions, two fail and the three that work produce yields of 50%, 70% and 80%. What gets published? 80%. The devil is in the details. In big pharma, you are trying to make a drug and the science better work or else you can't bring it to market. Much higher standards for reproducibility.
4. I guess my thought here is based on the fact that big pharma typically hires from academic labs. All those post-docs and senior scientists with years of experience? That's who big pharma hires. So overall, I would imagine that the level of experience in big pharma is greater than the average you would see in academia (which makes sense since academia is training for working in places like big pharma).
Once again, I always shy away from descriptions that put all "big pharma" or "academic" researchers into one pile. There are brilliant people on both sides and crappy people on both sides.
Thanks for the useful counter-points...I'm now armed with some more anecdotes (hah!) on the other end of the "big pharma" spectrum.
They certainly had access to the original data. To quote:
> To address these concerns, when findings could not be reproduced, an attempt was made to contact the original authors, discuss the discrepant findings, exchange reagents and repeat experiments under the authors' direction, occasionally even in the laboratory of the original investigator.
There are quite a few other studies which raise similar statistical questions about medical research; e.g.:
Did you read the study the parent post is talking about? A well funded laboratory, that was trying to not "just believe" research (as everyone else apparently does), was trying to replicate these results. If the science was good, all it should have taken is time and money (both of which they had enough of). And yet, 47 out of 53 celebrated results published in peer reviewed papers of the highest caliber could not be replicated. Let that sink in for a minute before you reply.
> there is _no_ reason to say that all research cannot be trusted.
Ok. Your reason to state that research can be trusted is that it is eventually replicated (thus confirmed), or thrown out (thus shown false), is that right? (You didn't state that as your reason, so perhaps you have other ideas -- but that's a common one, so I'll reply to it).
Assuming that's the case -- do you have any idea what percentage of results are replicated? And how much time after official publication?
Because if it takes e.g. 30 years until a bad publication is discredited, and (as the data point given by the parent shows) there are areas in which 90% of the data apparently can be discredited when you try to replicate it -- then, there actually might be reason to distrust research in general, because at any given point in time, more than 90% of non-discredited published results are wrong.
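A back-of-the-envelope version of that arithmetic (every number here is invented purely for illustration, not taken from any study):

```python
# Toy steady-state model: each year N_PER_YEAR results are published, a
# fraction WRONG of them are false, and a false result survives LAG years
# before it is publicly discredited. All numbers are illustrative.
N_PER_YEAR = 1000
WRONG = 0.9   # the parent's data point: ~90% fail replication
LAG = 30      # assumed years before a bad result gets discredited

# Within any window shorter than LAG, nothing has been discredited yet,
# so the "live" (not-yet-discredited) literature looks like this:
live_total = N_PER_YEAR * LAG
live_wrong = live_total * WRONG
print(live_wrong / live_total)  # 0.9 -- 90% of the undiscredited record is wrong
```

The point of the toy model is only that a long discrediting lag lets wrong results dominate the undiscredited record, exactly as the paragraph above argues.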
See also http://saveyourself.ca/articles/ioannidis.php (and the paper it references). This situation is not science fiction. 90% un-replicatable publications is probably limited to very few subjects. But 50% overall in medicine and biology is totally believable.
Which is not to say science (the abstract idea / discipline / method) is wrong - it's right. It's just that the things we human practice and often call "science" is very, very far from the ideal of science. Ignore that at your own peril.
I would argue that if even those research papers could not be replicated, an anecdote is all but worthless.
Statistics are themselves misleading - there are whole books on the subject (oh no! an anecdote! better close your mind now). They are highly contextual, but the popular press excels at stripping that context and proclaiming absurd extremes. Anecdotes are excellent context, putting statistics into perspective.
Another idiotic strawman argument.
If science and anecdotes are equally bull to you, how do you make up your mind about things? Magic?
It's not science that is the problem. It's that biology considers a 95% confidence sufficient. Considering how many studies are done each year, this virtually guarantees incorrect results.
The reason they do that is that it's impossible to get better results; they simply cannot run enough trials. So they are stuck. Things would only be different if:
a) all data, everywhere in the world, including negative results, was published regardless of funding/publication.
b) someone actually looked at that data, normalized it, and used it to assess the real significance of every result, in a sane manner (e.g. by using a bayesian inference with some reasonably behaving universal prior).
Neither a nor b will ever happen, and both are essential.
(note: publication of all data is not a sufficient requirement: if 20 independent labs each run the same experiment on a nonexistent effect, one of them is expected to hit 95% confidence purely by chance, and when they all publish their data, the record includes that one experiment that seems legit. This _will_ and _already does_ happen by chance)
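A minimal simulation of that note (the lab count, sample sizes, and the crude z-test are my own invented setup, not from any real study):

```python
import random

random.seed(0)
N_LABS = 20        # independent labs running the same null experiment
N = 30             # samples per group, both drawn from identical distributions

def null_experiment():
    """Both groups come from the SAME distribution: any 'effect' is pure noise."""
    a = [random.gauss(0, 1) for _ in range(N)]
    b = [random.gauss(0, 1) for _ in range(N)]
    diff = sum(a) / N - sum(b) / N
    se = (2 / N) ** 0.5               # known sigma = 1 in both groups
    return abs(diff / se) > 1.96      # "significant at 95% confidence"

hits = sum(null_experiment() for _ in range(N_LABS))
# Expected number of false positives: N_LABS * 0.05 = 1 lab.
# Chance that at least one lab "finds" something:
print(1 - 0.95 ** N_LABS)             # ~0.64
```

So even with every scrap of data published, roughly two times out of three the record will contain one seemingly legit positive result for an effect that does not exist.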
Let's take anything involving nutrition. Some challenges are: (1) people lie, (2) such studies can't be double-blind so placebo kicks in, (3) the statistical significance of short-term studies is zero, (4) you can't control all the variables, unless you lock those people in a cage and (5) most conclusions of such studies have the potential to confuse the cause and the effect.
But not all of science is like that. Just medicine.
Also what does "the statistical significance of short-term studies is zero" mean? I don't think it means what you think it means.
I would argue that short-term studies (for nutrition anyway) have little clinical significance, despite their statistical significance. I'm in medicine, and I read papers all the time detecting a statistical difference between control and experimental groups, but the difference is so tiny that it's meaningless. This is the balance you have to strike with large sample sizes. With a large enough sample, small differences are likely to be statistically significant but the key is determining if the difference is worthwhile.
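A quick sketch of that trade-off (the sample size, effect size, and z-test here are made-up numbers for illustration): with a large enough n, even a difference of a few hundredths of a standard deviation clears the p < 0.05 bar.

```python
import math
import random

random.seed(1)
n = 200_000                # a very large sample per arm
true_effect = 0.02         # tiny shift, in units of one standard deviation

control = [random.gauss(0.0, 1.0) for _ in range(n)]
treated = [random.gauss(true_effect, 1.0) for _ in range(n)]

diff = sum(treated) / n - sum(control) / n
z = diff / math.sqrt(2 / n)    # two-sample z-test, known sigma = 1

# z comes out far above 1.96: the difference is statistically significant,
# yet the effect itself (~0.02 sd) is far too small to matter clinically.
print(f"diff = {diff:.4f}, z = {z:.1f}")
```

The detection worked exactly as designed; it's the clinical judgment about whether a 0.02-sd difference is worthwhile that the p-value cannot supply.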
I blame bad science reporting for a lot of the anger you are feeling. Reporters don't seem to understand what they are reporting, and often the scientists themselves are (accidentally or on purpose) making it worse.
That's nice in theory, but does not happen for most published research.
> I'm in medicine, and I read papers all the time detecting a statistical difference between control and experimental groups, but the difference is so tiny that it's meaningless.
I'm trained in statistics. My ex was an MD. I used to read the NEJM for fun for a couple of years. Most of the results published are barely statistically significant for the small group they tested ("our sample included 40 caucasian females between the ages of 37 and 48, and we have a p value of 0.03" with no mention of the context which might make that p value meaningless - but let's assume they got that part right). And then, a couple of years later, some other study takes that result as absolute truth, but assumes it applies to any woman aged >30. And a couple of years later, it is assumed to be universal and speculated to apply to males as well.
Is your experience different?
> I blame bad science reporting for a lot of the anger you are feeling.
I blame tenure publishing requirements. While bad reporting certainly deserves its share of contempt, people these days do everything in order to meet the publishing requirements for tenure. Most stay away from outright fabrication, but otherwise every manipulation of the data that would make it fit for a higher caliber publication is being done as long as it is not outright fraudulent -- including dropping the background context so nicely exemplified by this xkcd comic http://xkcd.com/882/ . It often is the researchers doing the bad reporting with no outside help.
Or maybe you can trust those experiments that do get replicated successfully?
Also, I would like to propose a logarithmic scale for weighting such things. Say, if the article in question found something extraordinarily significant with 100 out of 100 samples resulting in A, then it's still not rational to weigh a contrarian viewpoint resulting in B with 1/101 - it should maybe be closer to 1/3 or something.
Consensus culture and worship of authority are not desirable in my opinion. Arguments should be weighed on their merits and it's appropriate to explore other viewpoints or explanations even if they turn out to be dead ends most of the time.
Vaccines protect you from the risk of contracting particular diseases, some of which are crippling, lethal, or incurable. Plus, most are extremely effective: once you take your shots, you are effectively immune. That's good.
There is a downside however. Sometimes, vaccines have side effects. Most side effects are quite benign, but if you're unlucky enough, they can be crippling, lethal, or incurable. That's bad.
From a medical point of view, vaccines are a net good (let's leave aside logistic considerations, or the effort required to go to the doctor). When you look at the stats, you stand a much better chance at life and health if you take the shot. Even for relatively minor illnesses like the flu.
Now, let's say someone posts a heartbreaking comment about how her 9 year old daughter died of a vaccine shot, with all the gory details about the suffering, how she couldn't participate in her school's festival, the size of the coffin… I'm quite sure there are stories of the kind. Given the sheer amount of readers here, maybe one of you will more or less directly relate to that. My apologies to those who do.
Nevertheless, what makes a good story doesn't necessarily make good evidence. When you know of reliable statistics, and you read a contrarian anecdote, you should shift your belief in the direction of the anecdote by a precise amount, which is almost always tiny. What your brain will actually do behind your back, however, is shift your belief by a significant amount, often crossing the "reasonable doubt" line. That's not rational, but that's what will happen. Nameless statistics feel abstract, remote. An anecdote on the other hand feels concrete, real, close. Worse, you can spend far more time reading about the salient anecdotes than learning about the end results of reliable, but boring, scientific studies.
Another example: you don't win the lottery. Period. You don't know of any close family or friend that ever did. But maybe one of you readers do. Maybe that one could comment and say "Hey, but my cousin did win the lottery!". Would that prove me wrong? Not at all. It's just that when the sample size is huge enough, even the tiniest chance can actualize.
Here is the "good" version: Bayesianism is correct. Those who don't believe me may want to read E. T. Jaynes or Eliezer Yudkowsky (long, and may feel abstract and dull). But countless studies about biases showed that we humans are poor at correctly assessing evidence at our disposal. Some of those studies showed that some failure modes come from anecdotes. Downvoting seems harsh, but it's the best we currently have to combat those failure modes.
Now we don't want to overdo it. I suggest we put a comment citing which reliable statistics contradict the downvoted anecdote. Maybe that'll help avoid groupthink. We may also want to allow people to just say they have anecdotal evidence to the contrary of whatever.
I'm not arguing against anecdotes, but there is an important distinction there.
And please don't straw man me. Personal experience is mostly great. Successful entrepreneurs may have better decision making processes, not just more luck. Programming issues should be weighed in on, since there are so few reliable studies here, and the field is so young. Etc. I was just talking about the cases when the evidence that contradicts the anecdote is solid and definite.
Studies are necessarily narrow and context-laden, even 'solid and definite' ones. The suggestion to automatically downvote anecdotes is too broad, and should be refuted.
Possibly. Actually I don't know. Anyone knowledgeable should disregard my opinion.
> The programming field is no longer young, we should give up that old excuse.
Right. However, I don't feel like we're anywhere near clearing the chaos around the psychology of programming. I still don't know for instance why so many people cannot understand functional programming, which I personally find simpler than procedural programming in most cases I deal with. Or why technical debt doesn't seem to be taken seriously. Programming is several decades old, but it still feels young to me.
> And no you are not being straw-manned, the argument is. Good to not take things personal here.
Hmm, yes, I was too aggressive here. Sorry.
> Studies are necessarily narrow and context-laden, even 'solid and definite' ones.
Ah, I didn't think of this danger. You're right, we at the very least need safeguards. Like, tying downvotes to reasons why they happen, so we (high karma users, moderators?) may be able to nullify those which turn out to be bogus. But that's complicated.
Or, maybe we could just not downvote, but point out in a reply that this is contrarian anecdotal evidence?
Also, considering that medicine is at the stage of alchemy and that doctors simply have no idea what long-term effects these vaccines have on our immune system, some questions do have to be asked.
Like, isn't it possible that with the prevalence of vaccines, our own capacity for generating antibodies gets affected?
And remember here that an exaggerated response of the immune system may be even worse than a lazier response. Such an exaggerated response may even kill you (e.g. Influenza). So either way, the long-term effects of over-reliance of vaccines may be quite bad.
What the hell are you talking about? There is probably no single more life-saving intervention in medicine than vaccines. It is true that a small number of people have a bad reaction to them, but more people have a bad reaction to tetanus.
Those who do not vaccinate are risking re-emergence of preventable epidemics: http://www.sciencebasedmedicine.org/index.php/whooping-cough...
And the ratio is much worse for actual vaccinations. You don't want to see what not vaccinating kids against polio results in...
But those are hardly data and the kids who got the live version (due to a fuck up) are hardly better off.
Not exactly. It's not a virus in latent form; it's either a killed virus, a piece of a virus, or a different, weaker virus that provokes the same reaction as the more important one.
(Do you know what latent means? It means that it shows up later, which vaccines do not do.)
> So yeah, personally I never take a vaccine that hasn't been in circulation for some time.
Yah, me too, but let's not overreact with nonsense.
> Like, isn't it possible that with the prevalence of vaccines, our own capacity for generating antibodies gets affected?
No, it's not possible. That's completely ridiculous. Do you know anything about vaccines at all? Seriously, that really makes no sense whatsoever. A vaccine does not do anything at all to our capacity to generate antibodies. All it does is take the exact same virus you would get if you got sick, and expose you to it in advance, that's all. It gives you a head start in making antibodies, but does not affect the generation of them in any way.
> And remember here that an exaggerated response of the immune system may be even worse than a lazier response. Such an exaggerated response may even kill you (e.g. Influenza).
And a vaccine creates a muted response, quite the opposite. Compared to a simple cold a vaccine consists of a minuscule number of virus particles. The entire trouble with making a vaccine is trying to get enough of a response, most of the time the body ignores it.
> So either way, the long-term effects of over-reliance of vaccines may be quite bad.
And how do you figure that? I'm not following your logic at all. Unless your logic is that the vaccine somehow changes the body's response, which it doesn't. So hopefully now that I've cleared that up you will no longer claim this.
> Do you know anything about vaccines at all?

> Unless your logic is that the vaccine somehow changes the body's response, which it doesn't.
The most plausible explanation is the http://en.wikipedia.org/wiki/Hygiene_hypothesis see also http://blogs.scientificamerican.com/disease-prone/2012/02/15...
> And how in the world would you know that?
How could it? If a vaccine could cause such a change so could any illness. A vaccine is just a piece of virus put where your body can notice it. Everything after that is entirely from the body.
For example rabies: Lethal right? But the body can actually clear the rabies virus with no trouble - almost. The trouble is that by the time the body gets rids of the virus it's too late.
So what do you do? You give the body the rabies virus ahead of time, and you do it in a way that prevents the person from actually getting sick. Then next time the body encounters rabies it's ready.
All vaccines work exactly this way: You let the person encounter the illness ahead of time. You make no change whatsoever in the person - all you are doing is making them slightly sick, but in a way that doesn't kill them.
Whatever change the vaccine causes, the illness also does - except the illness also causes damage as the virus replicates.
As for how we should weigh new evidence, this is essentially a solved problem: use Bayes' rule. Suppose that 100 out of 100 studies indicate that smoking is a leading cause of cancer and a contrarian viewpoint ("My grandfather smoked his whole life and lived to 123!") indicates otherwise. Then that anecdotal viewpoint should get approximately 0 weight. Zero. Zip. Zilch. Nada.
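To make that concrete, here is a toy Bayes update in odds form; every number in it is invented for illustration:

```python
# Posterior odds = prior odds * likelihood ratio (Bayes' rule in odds form).
prior_odds = 1e6        # odds favouring "smoking causes cancer" after 100 studies

# A 123-year-old lifelong smoker exists under BOTH hypotheses; he is merely
# somewhat less likely if smoking is harmful. So the likelihood ratio is near 1:
p_anecdote_given_harmful = 0.001
p_anecdote_given_harmless = 0.002
likelihood_ratio = p_anecdote_given_harmful / p_anecdote_given_harmless  # 0.5

posterior_odds = prior_odds * likelihood_ratio
print(posterior_odds)   # 500000.0 -- the anecdote barely moved the needle
```

The anecdote is not ignored; it is weighed exactly as much as it deserves, which against a mountain of concordant studies is almost nothing.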
We're all experts in a few subjects at best. In those subjects we can easily explore different viewpoints, balance different arguments and keep track of the different schools of thought. We can even confidently diverge from expert consensus if needed. But in most subjects we're enthusiastic laymen at best. I don't think debates and exploration of different viewpoints then lead to much greater understanding. Just look at any forum on the internet (including this one). Debates aplenty and the few knowledgeable people get drowned out in a sea of contrarian musings.
Expert consensus is just the aggregate opinion of those who have the best understanding. So when a layman disagrees with the experts he's almost certainly wrong. What I see is the opposite of consensus culture. I see a willingness to disagree with the experts before understanding the subject material in depth.
Well, see, that's part of the problem. The original sample of 100 was, hopefully, selected at random. Whereas the anecdote was selected by the person telling it because he or she thought it was apropos. With a large enough population of potential commenters, the chances of someone doing so get really high.
Opinions overwhelm all other forms of material in a discussion. Anecdotes are actually one up from opinions as they are concrete. You should weight them like so:
Statistical Evidence + Logic > Statistical Evidence > "Common Knowledge/Wisdom" > Anecdote + Logic > Anecdote > Opinion + Logic > Opinion
...I kid. It's actually an interesting idea. I don't know about scrapping them entirely, but I think a lot of sites (say, Reddit) could benefit from moving the anecdotes elsewhere. Partially because of this, and partially because anecdotes in any form tend to derail the discussion pretty darn quick.
The idea that the US Government conducted illegal eavesdropping/wiretapping operations against US Citizens was a "contrarian anecdote" in 2003.
This blog post condescendingly claims that HN readers are not sophisticated enough to balance out the sources that are input to their rational decision making process.
In reality, contrarian opinions sometimes turn out to be correct, and mainstream opinions sometimes turn out to be wrong. Often, the benefit of a contrarian opinion is that it causes people to ask more questions, which is rarely a bad thing.
People are sheeplike enough without having to be encouraged to follow the herd!
I suggest everyone find a few contrarian theories and imagine what it would be like if you rearranged your life as if you expected them to be totally true. Most people are unwilling to go that far, and yes, in that way contrarian stories can weaken rational processes.
As the saying goes: they laughed at Galileo. They laughed at Einstein.
And they laughed at Bozo the Clown.
His notion that in the field of medicine you should disregard contrarian anecdotes because there's statistical evidence is horrible advice. If you actually looked at said statistical evidence, you'd realize it is very rarely strong, and often only relevant to 70% or so of the population (which is great for 2 out of 3, but useless for the third one).
"Best practices" often aren't, and common sense is not at all common in medicine. A significant number of published results are plain wrong (see http://saveyourself.ca/articles/ioannidis.php and the paper it references). A lot of medical advice is wrong, harmful or useless; the archives of Seth Roberts' blog are an enlightening read.
The problem is that this is impossible for most people.
(1) Use your personal discretion. Some anecdotes are funny. Others can be perfectly cogent rebuttals, especially when people make overbroad statements.[a]
(2) The author is right in some respects: you need to be aware of the ways that stupid stories can bias you. Even being aware of this fact is sometimes not sufficient, so use downvoting to try to protect others.
(3) Lots of people believe in anecdotal responses to anecdotal original-claims. [b]
(4) A lot of people tried to be funny by offering anecdotes to be downvoted. Unfortunately, people haven't consistently taken the advice above, so they are not all in one place (at the bottom). That is a pity -- this would have been a fun and interesting use of downvotes.
[a] Actually, the discussion is full of overbroad statements of this form, like "no universal truth" and someone presenting "nothing should be above questioning" as itself beyond question.
[b] I would like to formally respond that this is generally stupid -- you don't clean up a house by flinging crap at the crap. Your mileage may vary.
Having said that, this is another really bad article in what seems like an endless series of bad articles on HN. Here are a few of the more obvious flaws:
- There is no universal truth as the author seems to imply. Simply because some publication or source you may like has performed some sort of statistical study doesn't have a lot of meaning on its own. Yes, 100% of the people who eat bananas are dead within 120 years of their consumption. No, that does not mean bananas are bad. The study or scientific reporting is simply the beginning of a much longer conversation society has over many decades that leads us to higher-fidelity models.
- The purpose of a social site is to behave socially. While places like HN have (or used to have) a lot of different guidelines for the types of behaviors that are encouraged or not, being social means sharing stories, anecdotes. We are not robots.
- The idea that people are unable to sort out personal anecdotes from other forms of information. The follow-up idea that since they are not able to do this, we should prevent ourselves from sharing such stories. This is bad, bad, bad, bad, bad, bad .... bad(n). We are humans. We share stories. Anybody who says "people are so stupid" can justify just about damn near anything as long as they keep emphasizing the stupidity and danger some people's actions represent.
Every now and then, gasp(!), published research is either wrong or doesn't show anything near what the reporter claims. Anecdotes don't help with this, of course, but they serve to remind us that even well-known scientists working at the highest quality standards available are still just sharing with us a very specialized form of anecdote. We did these things in this way and this is what we observed. Here is how you also can observe this. The really "good" part of the story they are sharing is talking about hidden assumptions, population variance, reproducibility, and so forth. Anecdotes don't do this, but they help us brainstorm ways in which we can improve the discussion, take the next experiment to an even better place.
I'm very uncomfortable with the line of reasoning that goes somewhat like this: people are broken in some way, therefore we must somehow control what they read, say, or think for their own good. To me the beauty of western civilization is that really broken people can do these amazing and awesome things. The fact that we're deeply flawed is the magic. Science and human advancement work because of our flaws, not in spite of them. This is a very important thing to understand! Setting up some ideal of perfection, no matter how well-intended, and then mucking around with the way societal interaction works in some effort to improve on things is heading down a very dark path that has a very unhappy ending. This attitude seems rife in the technology community, however, perhaps because we are such analytical people.
I don't want my fellow man to be irrational and distrustful of science and knowledge. But I'll take that any day over silencing contrarian articles and dissent. We've done the math on this: wrong people who share emotional stories and persuade crowds about all sorts of illogical things are a price that a dynamic community pays for progress.
You distinguish between people who are sharing anecdotes and people who are making strawmen arguments in the usual way that you'd distinguish them - by tone, follow-ups, etc. But if your sole contribution to a thread is "me too" (or "actually, not me too"), maybe that contribution isn't particularly valuable in itself.
And I'm comfortable making a "corrective upvote" because I think downvotes should be reserved for obvious spam, completely OT comments and comments that add nothing to the discussion at hand.
The one other thing that strikes me is that, for the sake of argument, I will often convert more dependable facts into anecdotal form to ease understanding. I've found, through trial and error, that just stating the hard facts tends to lead into a circle of explanation, but stating that same information in more relatable terms simply gets the point across better.
"There is no universal truth"? Are you sure? Because if that's true then there is no standard to judge whether one model is "higher-fidelity" and in fact there is nothing for science to do at all. Do you really believe that?
Do you really need to give up the idea that anything is actually true in order to dispute this blog post?
I don't understand how you figure that there is a choice between distrusting science and knowledge and silencing dissent. You seem to think that science and knowledge are just some form of political orthodoxy.
Theoretically, there is a "universal truth", but for all intents and purposes, there isn't outside the realm of Math.
We judge science's fidelity by how well it correlates with repeatable experiments - which may be characterized by some "universal truth", but that's beside the point. In Newton's day and age, Newtonian mechanics seemed to describe essentially everything. And then it turned out to be a crude approximation that only works at large scales.
In 1900, there was a physics convention at which the tone was basically: we have everything worked out, except for three minor things - the Michelson-Morley experiment (solving this required developing the theory of relativity), black-body radiation (solving this required developing quantum theory), and the photoelectric effect (which also requires quantum theory to explain properly).
> Do you really need to give up the idea that anything is actually true in order to dispute this blog post?
No. But you do need to give up the idea that you have certainty of knowledge about how true things are.
> You seem to think that science and knowledge are just some form of political orthodoxy.
In math, they aren't. In physics, they aren't.
In biology, it's not so clear.
In medicine, and nutrition, there's a ridiculous amount of political orthodoxy and "religious" beliefs -- and last I heard, they were considered sciences.
There is a huge difference between saying "something is true, but I don't know what (yet)" and "there is no such thing as truth"; between "a lot of people try to commandeer medicine to sell things" and "there is no actual truth of anything to discover in the field of medicine".
I was not giving you any personal advice. I was taking your "you" as a general statement to the reader, and replying with the same language pattern (e.g. if I said "you can bring a horse to water", I would actually mean "one can bring a horse to water".)
> There is a huge difference between saying "something is true, but I don't know what (yet)" and "there is no such thing as truth"; between "a lot of people try to commandeer medicine to sell things" and "there is no actual truth of anything to discover in the field of medicine".
Indeed, there is a huge difference, I don't think anyone is disputing that.
What some people (me included) are disputing is that what is considered "the state of the art" in many sciences (other than math and physics) is actually the result of the rigorous scientific study it is assumed to be, and that therefore well-reasoned and supported contrarian explanations, data and opinions should be welcome (they aren't; there's active suppression).
Yes, most criticism is useless, but ...
No, most research is NOT as sound as the researchers themselves believe.
That's true, but neither did you (or anyone else ever, for that matter) provide support for the idea that MOST scientists do understand statistics. See how easy it is to discard anything you disagree with?
> I am much more prepared to believe that reporters don't understand statistics than scientists.
That's fine, but (a) it doesn't say anything about how bad scientists are with statistics (only that they are slightly better than reporters, which I tend to agree with), and (b) this is an argument from bias/faith/religion/prejudice, not from science or data. You are just as guilty as anyone you criticize. You might be more right or less right, but you* don't have the moral ground. (* general you).
> Still, you have provided an anecdote in support of broad sweeping statements.
What was that statement of yours about learned people digging into science? So now it is not enough for those people to know what they are talking about - they have to do it in a format you approve of?
I can provide tens more valid criticisms. I charge $200-$1000/hour for my line of work, and I'd be happy to take as much to work for you finding them, when I have some free time.
But I'll throw in a freebie: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1182327/ - though I suspect it will stop at most people's "no true scotsman/scientist" filter...
Indeed not, but to use the coined term, some things are truthier than others. By and large, we can make more informed and better decisions on the "truthiest" truths without mucking up the argument with individual, pointless anecdotes, especially when those anecdotes aren't backed by anything but what simply "happened" to them.
More often than not, it's a case where some other article summarizes (most likely in an incorrect way) another study or someone on HN mentions a study, etc.
- He is only arguing that there is universal truth in proxy by arguing that the scientific method is more valuable than anecdotal evidence. If you don't agree with that, you probably disagree with most of HN (pure conjecture, does anyone else here think anecdotal evidence is more valuable than statistical analysis?)
- I agree with you on this point. I don't think we should downvote comments because a fun discussion is what the comments are for, not to try to prove or disprove a study.
- Statistical analysis is not a specialized form of anecdote. That's a stretch.
The strawman here is in equating (published) statistical analysis with the scientific method. Of course the scientific method is more valuable, but that's not necessarily relevant.
Please have a look at http://xkcd.com/882/ if you haven't already - what this comic describes is a perfectly valid statistical analysis according to the "scientific method" (only neglecting base rates, like 99% of published papers do).
This is (unfortunately) very commonly practiced in the life sciences, including medicine -- sometimes knowingly but mostly unknowingly. Bad reporting not required for a horrible, long lasting effect on the future.
As a result, most arguments about science are invalid from a scientific-method point of view. But the claims brought up -- including anecdotes -- are often interesting and informative.
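The multiple-comparisons trap behind the xkcd 882 pattern mentioned above is easy to demonstrate with a quick simulation. This is a hedged sketch: the test count, threshold, and trial count are illustrative choices, not taken from any study in the thread.

```python
import random

def any_false_positive(n_tests=20, alpha=0.05, trials=2000, seed=42):
    """Simulate `trials` studies. In each, run `n_tests` independent
    significance tests on pure noise (no real effect anywhere) and
    count how often at least one test comes out 'significant'."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # Each null test has probability `alpha` of a false positive.
        if any(rng.random() < alpha for _ in range(n_tests)):
            hits += 1
    return hits / trials

# With 20 comparisons at p < 0.05, the chance of at least one spurious
# "green jelly beans cause acne" result is 1 - 0.95**20, roughly 0.64.
print(any_false_positive())
```

In other words, a "valid" p < 0.05 finding is close to a coin flip once you test enough hypotheses and report only the hit.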
To a certain degree I do. I would even go so far as to say that the provider of the contrarian perspective is making a scientific contribution by pointing out the lacking external validity of the original study. I personally believe that the more interesting phenomena in science are those incidents when things behave differently than expected. Contrarian anecdotes are most often the best starting places for these phenomena.
Contrarian anecdotes are important (but they too should be questioned).
Nothing should be above questioning; even prevailing wisdom.
Especially prevailing wisdom, since it is least likely to be questioned normally.
The proposition that the article makes is not that anecdotal stories must be false, but that they might influence the reader more than they should. Thus they should not be encouraged.
The voting system on HN sometimes demotes bad content. Far more often (discussed widely elsewhere) it's used to assert agreement or disagreement. When used in this sense, it's a sort of instant poll.
In the case of anecdotal argument, it forms an ad-hoc 'scientific' experiment. The HN population weighs in using their own experience. Those whose memory (or, ok, memory-triggered emotional response) aligns with the author may upvote, etc.
If 'bad content' is uniformly suppressed instead, this social experiment is lost, and the community loses. The merit of instant polls is debatable, but other such polls are supported here. In fact the test group on HN may exceed the original 'scientific paper' group by an order of magnitude. Its statistical significance may exceed a graduate student's narrow study.
And thanks to selection bias, its external validity is likely zero. "Our HN poll found that 1 in every 10 people has founded and sold a company."
I see this kind of stuff all the time on Reddit, and it's become so numbing that you know, no matter how well you construct an argument or how many facts you cite, someone will always come out of the woodwork and tell you how you're wrong because it didn't happen that way for them. It wouldn't be so bad if people didn't take that as a genuine counter-argument.
And, frankly, if we're talking about public discussion of scientific papers, inappropriate generalization is as big a problem as contrarian anecdotes, if not bigger. Scientific papers often cover highly specific observations that are useful primarily to other researchers, and often even then not for many years. People then try to apply that specific knowledge to practical day to day situations.
But these are just non-arguments when the finding is "In 9 out of 10 cases, ...". The anecdote is the 1 out of 10 case.
Let's try to question anything, and when it is suspicious - by all means, say so! But, I guess what the author is trying to say (beyond the whole Downvote discussion) is that we should try to reason and make a proper argument. Especially one that is not already addressed in the research.
The whole point of the article is that people won't judge that anecdote as only 0.1 relevant, but much much more (I'd say close to 0.9 if they personally prefer what anecdote says to what research says). It's a general human flaw that's very visible in day-to-day interactions with people.
When you start to design a statistical experiment, you have already made an important methodological choice. See
I've noticed that in my own field which is education, there appears to be a fondness for sophisticated statistics, even though no manager ever allocated students to teachers on a double blind random basis. An excellent example is the way the UK Education ministry has decided that 'phonics' is the way to teach reading.
I think that this general tendency might be an example of 'white coat syndrome' in action: the belief that using formal statistical techniques might increase the meaningfulness of the results. I suppose that is a form of cargo cult.
This is hacker news, a forum aimed at people with novel business proposals and new software to try out. Should you be trying to find 'the Truth' or should you be building some grounded theory that tells you what to do next, provisionally, now, today?
Now if you're telling me that your sister stuck a magnet in her ear and cured her cancer, I'm not going to give that credence without some real data.
Most things on HN are not that, and a good story often compels us to think. So FWIW, I'm not downvoting contrarians, and I'm not downvoting anecdotes for being anecdotes.
A: "Murrumbidgee River is fun and safe for children to swim in! No one came to any harm there for the last 100 years!"
B: "Dunno... my dog was eaten by a crocodile there last Saturday..."
B's dog is a sample of one, but maybe worth paying attention to.
I think a good point is made: statistical evidence is also misleading - it deliberately ignores (averages out) the extreme cases. The results are a distribution; statistics folds that into one number. Anecdotes fill out the distribution.
Statistical evidence is not misleading; it's simply the case that if you are seeking outliers, as in your example, then looking at measures of central tendency won't contain what you're looking for. Anecdote has no role in "fill[ing] out the distribution."
Alerting people to new circumstances is a good use, but that's pretty much all. Proof by counterexample works well only in mathematical logic, which is not how the real world works. In reality you enter the domain of probability theory, and there counterexamples work exactly as the article author says - as evidence that needs to be properly weighted. And it is those weights that people vastly overestimate in the case of anecdotes.
 - by this I mean, mathematical logic is not how we model, comprehend and operate in reality.
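That proper weighting can be made concrete with Bayes' rule. The numbers below are purely illustrative assumptions for the sake of the sketch, not figures from the thread.

```python
def posterior(prior, p_obs_if_h, p_obs_if_not_h):
    """Bayes' rule: the weight of an observation depends on how much
    more likely it is under the hypothesis than under its negation."""
    num = prior * p_obs_if_h
    return num / (num + (1 - prior) * p_obs_if_not_h)

# Suppose studies put "the treatment works" at a prior of 0.9, where
# "works" means it succeeds for ~90% of patients. A single failure
# anecdote (10% likely if it works, 50% likely if it doesn't) should
# move belief to 0.9*0.1 / (0.9*0.1 + 0.1*0.5), about 0.64 -- not to
# near zero, as readers who treat the counterexample as a refutation
# implicitly do.
print(posterior(0.9, 0.1, 0.5))
```

The counterexample is real evidence; it just deserves a modest weight, not a veto.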
First we must determine the validity of the article existing in itself and any motivations or biases in it. For this, contrarian anecdotes are useful. This article seems to miss this point.
Once we can accept the article at face value, then we may hit the biases described.
The herd is rarely correct.
"One death is a tragedy; one million is a statistic."
This topic sort of came up the other day in the thread about the girl losing the iPad software she needs to talk (Silencing Maya).
Some commenters thought the story should be ignored as a data point about the societal value of patents.
I disagree in that case because I don't believe economics or social sciences have anywhere near the amount of rigor and theory to make a good claim on whether the patent system is a net benefit to society. Deciding such a thing is a very old, and classic problem in philosophy.
The OP seems to be implicitly relying on a variant of utilitarianism, which IMO is woefully inadequate to rely on for moral decisions.
Since science is so hard, there is a lot of bad science out there. What gets reported in the wider media has all sorts of weird selection bias, never mind what gets picked for publication in journals.
Anecdotes are human stories, and they are deeply connected to why we care. Statistical tools can be used in ways that justify harming people in the name of the greater good.
I agree that scientific medicine provides tools for "real" medicine that other methods don't. I just think we should listen to real people's stories, especially if they are true.
Anyone reading this probably gets that one can't exactly counterbalance the effect of anecdotes within themselves. But it can still be countered somewhat, and I think it's worth it to hear stories (that are relevant to the topic).
Now what is the basis of my beliefs? Mostly intuition, not rationality. But I don't think it's possible to undermine my ideas with some kind of experimentally based argument. There are just way too many variables!
The quote (and variations) are quite famous; it might be apocryphal, but I think it is true. Hearing about hundreds of thousands of people massacred in a foreign land doesn't hit home when you are reading about it from far away.
That's why reporters (New York Times style) try to weave in illustrations and stories about individuals even when discussing a larger trend.
Most of the conversations on HackerNews aren't about things that have right and wrong answers. As a matter of fact, many of the most popular posts are nothing more than anecdotes themselves. So why exactly should an anecdote in the comments be downvoted?
E.g., I've been prescribed medication and ended up with a side effect not listed in the drug literature. The doctor told me that the side effect must be psychosomatic. Later scientific evidence revealed that about 50% of patients given the drug experienced the same side effect, but that it had been previously underreported. Who's to say that researchers would have even bothered to research the side effects more thoroughly if they hadn't paid attention to the fact that the anecdotal evidence contradicted the scientific evidence?
Or remember when the scientific evidence seemed to indicate that a high carbohydrate, low fat diet was the healthiest choice? Should I have ignored my anecdotal evidence that that diet made me feel like crap?
Science sometimes gets itself into harmful orthodoxies. See Thomas Kuhn for more info on this if you are unfamiliar. One example of this is Behaviorism in Psychology. In this field, it was scientific orthodoxy for many decades that Behaviorism was scientific and Cognitive Psychology was not because Behaviorism was based on only quantifiable, measurable data. It took Chomsky to point out the idiocy of this orthodoxy, thereby breaking the orthodoxy, allowing science to progress.
Re the anecdotal evidence that saved my career: I've read here that there is no scientific evidence that ergonomic keyboards can help prevent or ameliorate RSI. I am 100% sure, however, that the Kinesis Contour keyboard saved my ability to type. All I have to do to know this for sure is listen to my body. Furthermore, I personally know about a dozen programmers who feel similarly. I'm sure that someone will pipe in that this is almost certainly the placebo effect. If that's the case, then Kinesis makes the world's very best placebo, as placebo-like things have done precious little for me in any other area. In any case, even if it were the placebo effect, what does it hurt anyone to ignore the putative scientific evidence and try out a Kinesis keyboard for themselves to see if it provides them with relief?
The idea that posts should be downvoted for recommending such ergonomic keyboards is insane.
If I don't see any contrarian comments, then I suspect groupthink in discussion.
However, anecdotes can and do (and should) play a large role in influencing how we think. They can humanize a problem and create food for thought in a way that no amount of statistics ever could.
For example, it's one thing to cite statistics that children of gay parents do just as well as those of straight parents (I have no idea if it's true, but it's an interesting contemporary question). But that's not likely to change the mind of a homophobe on the issue of gay adoption. On the flip side, a lucid, heartfelt anecdote from a person who had gay parents might actually help someone to understand what it's like to grow up in that environment and therefore become sympathetic.
Of course, it has no bearing on the actual statistics at all, but sometimes statistics aren't the most important thing.
If one wants to "attack" an anecdote, then a contrarian anecdote is the weapon.
If one wants to attack scientific data, you need contrarian scientific data.
At least, I hope that is right. Therefore, to mix the two is like attacking a tank with a wooden stick.
Surely, anecdotes are used as the premise of scientific research. Lots of people tell stories. There seems to be something interesting going on. Then you do the research and produce the data. If the data is conclusive, then you're past the anecdote. If later contrarian anecdotes appear, and they seem significant, off you go to scientific research again.
I know I'm wrong somewhere. But where?
"Contrarian anecdotes like these are particularly common in medical discussions, even in fairly rational communities like HN. I find this particularly insidious (though the commenters mean no harm), because it can ultimately sway readers from taking advantage of statistically backed evidence for or against medical cures. Most topics aren’t as serious as medicine, but the type of harm done is the same, only on a lesser scale."
The basic problem, as the interesting comments here illustrate, is that human thinking has biases that ratchet discussions in certain directions even if disagreement and debate is vigorous. The general issue of human cognitive biases was well discussed in Keith R. Stanovich's book What Intelligence Tests Miss: The Psychology of Rational Thought.
The author is an experienced cognitive science researcher and author of a previous book, How to Think Straight about Psychology. He writes about aspects of human cognition that are not tapped by IQ tests. He is part of the mainstream of psychology in feeling comfortable with calling what is estimated by IQ tests "intelligence," but he disagrees that there are no other important aspects of human cognition. Rather, Stanovich says, there are many aspects of human cognition that can be summed up as "rationality" that explain why high-IQ people (he would say "intelligent people") do stupid things. Stanovich names a new concept, "dysrationalia," and explores the boundaries of that concept at the beginning of his book. His book shows a welcome convergence in the point of view of the best writers on IQ testing, as James R. Flynn's recent book What Is Intelligence? supports these conclusions from a different direction with different evidence.
Stanovich develops a theoretical framework, based on the latest cognitive science, and illustrated by diagrams in his book, of the autonomous mind (rapid problem-solving modules with simple procedures evolutionarily developed or developed by practice), the algorithmic mind (roughly what IQ tests probe, characterized by fluid intelligence), and the reflective mind (habits of thinking and tools for rational cognition). He uses this framework to show how cognition tapped by IQ tests ("intelligence") interacts with various cognitive errors to produce dysrationalia. He describes several kinds of dysrationalia in detailed chapters in his book, referring to cases of human thinkers performing as cognitive misers, which is the default for all human beings, and posing many interesting problems that have been used in research to demonstrate cognitive errors.
For many kinds of errors in cognition, as Stanovich points out with multiple citations to peer-reviewed published research, the performance of high-IQ individuals is no better at all than the performance of low-IQ individuals. The default behavior of being a cognitive miser applies to everyone, as it is strongly selected for by evolution. In some cases, an experimenter can prompt a test subject on effective strategies to minimize cognitive errors, and in some of those cases prompted high-IQ individuals perform better than control groups. Stanovich concludes with dismay in a sentence he writes in bold print: "Intelligent people perform better only when you tell them what to do!"
Stanovich gives you the reader the chance to put your own cognition to the test. Many famous cognitive tests that have been presented to thousands of subjects in dozens of studies are included in the book. Read along, and try those cognitive tests on yourself. Stanovich comments that if the many cognitive tasks found in cognitive research were included in the item content of IQ tests, we would change the rank-ordering of many test-takers, and some persons now called intelligent would be called average, while some other people who are now called average would be called highly intelligent.
Stanovich then goes on to discuss the term "mindware" coined by David Perkins and illustrates two kinds of "mindware" problems. Some--most--people have little knowledge of correct reasoning processes, which Stanovich calls having "mindware gaps," and thus make many errors of reasoning. And most people have quite a lot of "contaminated mindware," ideas and beliefs that lead to repeated irrational behavior. High IQ does nothing to protect thinkers from contaminated mindware. Indeed, some forms of contaminated mindware appeal to high-IQ individuals by the complicated structure of the false belief system. He includes information about a survey of a high-IQ society that found widespread belief in false concepts from pseudoscience among the society members.
Near the end of the book, Stanovich revises his diagram of a cognitive model of the relationship between intelligence and rationality, and mentions the problem of serial associative cognition with focal bias, a form of thinking that requires fluid intelligence but that nonetheless is irrational. So there are some errors of cognition that are not helped at all by higher IQ.
In his last chapter, Stanovich raises the question of how different college admission procedures might be if they explicitly favored rationality, rather than IQ proxies such as high SAT scores, and lists some of social costs of widespread irrationality. He mentions some aspects of sound cognition that are learnable, and I encouraged my teenage son to read that section. He also makes the intriguing observation, "It is an interesting open question, for example, whether race and social class differences on measures of rationality would be found to be as large as those displayed on intelligence tests."
Applying these concepts to my observation of Hacker News discussions after 1309 days since joining the community, I notice that indeed most Hacker News participants (I don't claim to be an exception) enter into discussions supposing that their own comments are rational and based on sound evidence and logic. Discussions of medical treatment issues, the main concern of the submitted blog post, are highly emotional (many of us know of sad examples of close relatives who have suffered from long illnesses or who have died young despite heroic treatment), and thus personal anecdotes have strong saliency in such discussions. The process of rationally evaluating medical treatments is the subject of entire group blogs with daily posts
and has huge implications for public policy. Not only is safe and effective medical treatment and prevention a matter of life and death, it is a matter of hundreds of billions of dollars of personal and tax-subsidized spending around the world, so it is important to get right.
Blog post author and submitter here tylerhobbs suggests disregarding an individual contrary anecdote, or a group of contrary anecdotes, as a response to a general statement about effective treatment or risk reduction established by a scientifically valid
study. With that suggestion I must agree. Even medical practitioners themselves have difficulty sticking to the evidence,
and it doesn't advance the discussion here to bring up a few heart-wrenching personal stories if the weight of the evidence is contrary to the cognitive miser's easy conclusion from such a story.
That said, I see that the submitter here has developed an empirical understanding of what gets us going in a Hacker News discussion. Making a definite statement about what ought to be downvoted works much better in gaining comments and karma than asking an open-ended question about what should be upvoted, and I'm still curious about what kinds of comments most deserve to be upvoted. I'd like to learn from other people's advice on that issue how to promote more rational thinking here and how all of us can learn from one another about evaluating evidence for controversial claims.
The purpose of a site like HN is not to arrive at some kind of imaginary consensus, it's to inform and engage people in meaningful discussion. There is no winning side to be on, and no ultimate arbiter of truth. By downvoting as you do, you rob the site of content in exchange for an illusory sense of victory.
For example, if a close friend goes on and on about how the Ford he bought was a piece of crap, detailing how the transmission failed at 30k miles and the rear-view mirror fell off, you’ll be wary about buying a Ford in the future, even if Consumer Reports rates them highly.
Don't buy Ford trucks. They're too reliable.
You would be better served by this:
The author must be a frequent coffee drinker who didn't like that some people had a different experience with coffee than some other people had, and felt compelled to write that post. It's not the contrarian anecdotes that left me with the sense that the research findings weren't conclusive. It's the fact that a population of 124 people in two cities is not at all representative of the target population which numbers over 40 million according to the census bureau.
Maybe when the study researchers conduct a larger study I'll believe them.
If we downvote anecdotes, we won't have a justification to vote for scumbag politician with a heartfelt story to tell!!
edit: here's a terrific example!!! http://www.washingtonpost.com/local/alabama-law-drives-out-i...
Sorry, I'll go on the experiences of myself and people I trust over research/article spinning.
This article is literally asking for the right to lie (under the guises of 'research') and asking us to mod down anyone who calls them out on it. It really takes some face to say "Ignore what you experience - and vote down the experiences of others - and trust our data instead."
Next you'll sell me the most reliable cloud on the planet. All the responses on the article say they've had nothing but problems and downtime. But, I should just ignore these, right?
One should, when listening to a study, question the funding. Likewise, dissenting opinions must also be examined for which interested parties have a hand in their creation.
You are describing a method for judging arguments without thinking critically about the arguments themselves or examining their basis in evidence and that is not any kind of science. One does not find any measure of objectivity by averaging between opinions, only by holding arguments to the yardstick of rationality and evidence.