Among the 435 members of the House, for example, there are one physicist, one chemist, one microbiologist, six engineers and nearly two dozen representatives with medical training... This showing is sparse even with the inclusion of the doctors, but it shouldn’t be too surprising.

The article clearly considers engineers and doctors to be scientists. As it should - they are all well-versed in the scientific method by education and should know how to apply it.




My wife is a medical doctor doing a PhD right now. By her own admission, doctors are _not_ well-versed in the scientific method _by education_. The others in her lab are all biologists, and she spent the first year playing catch-up in terms of being able to do science; she says she still feels like she'll always be behind compared to the biologists in the lab, who got regular scientist training in their five years at university. She's certainly being pessimistic, though; her lab is headed by a medical doctor...

Anyways, my point is, doctors are not trained to be scientists.

Disclaimer: This is Hungary. I have four doctors in my close family. I'm a physicist.


Well-versed may be a very relative term. I've been exposed to physics, engineering and computer science. The physicists had the scientific method drilled in deep. The engineers a bit. The computer scientists had vaguely heard about it. We still call them computer scientists. Your wife may suffer from a similar "relative difference". (Coincidentally, I could repeat your story with my wife, who's a computer scientist working for doctors who work with biologists in a lab.)

Does a law degree or an MBA cover the "characterize, hypothesize, predict, experiment" cycle?


One thing that comes up again and again is understanding significances, errors and such. This stuff is second nature to a good (and honest) physicist coming out of University, even before he becomes a "pro".

In my experience, to a doctor, even a good one, _at first_ it's just an annoying thing they have to do to get published and they don't really understand the significance of significances. (LOL.)

I would argue that this kind of sensitivity to data should be important if you're running a country. You can't even say that there are underlings for this: the data is so important, and the possibility of underlings skewing it for whatever reason so sizable, that I would certainly want some understanding of and feeling for the data myself before holding a press conference about it or using it to guide my lawmaking.


In my experience, to a doctor, even a good one, _at first_ it's just an annoying thing they have to do to get published and they don't really understand the significance of significances.

There is an entire medical discipline that knowingly uses shitty statistics[0]. Then there are the ones who unknowingly use shitty statistics[1]. Absent a co-author with a qualification that is impossible to get without learning real statistics, the default when reading medical research should be that the result is either overblown or wrong.

Biologists aren't much better than medical doctors, to my knowledge.

[0]http://lesswrong.com/lw/72f/why_epidemiology_will_not_correc...

[1]http://www.theatlantic.com/magazine/archive/2010/11/lies-dam...
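
To see how easily shitty statistics turn into publishable results, here's a little simulation of my own (a hypothetical sketch, not taken from either link): run twenty comparisons on pure noise at the usual p < 0.05 threshold, and on average one of them comes out "significant". That's the multiple-comparisons trap a lot of this literature falls into.

    # Hypothetical sketch: 20 independent t-tests on pure noise.
    # At p < 0.05 you expect roughly one spurious "finding" per batch.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    hits = 0
    for _ in range(20):
        a = rng.normal(size=30)  # "treatment" group: pure noise
        b = rng.normal(size=30)  # "control" group: also pure noise
        _, p = stats.ttest_ind(a, b)
        hits += p < 0.05
    print(hits, "of 20 null comparisons came out 'significant'")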


Just recently, Science published (and NASA had a PR hoopla about) a bogus paper on arsenic life, after it went through the usual Science peer "review" process. The paper was discredited for outrageously bad statistics and bad methodology by numerous Comment papers, which Science actually published. For reasons I don't really understand, Science did not retract the paper and called the ensuing outrage normal scientific discourse, even though I think they would have benefited from a mea culpa moment regarding their peer review.


You have my vote.

Lots of policy decisions are made based on statistics. Yet statistics are easy to fool someone with if they don't understand them properly.

In my experience, to a doctor, even a good one, _at first_ it's just an annoying thing they have to do to get published and they don't really understand the significance of significances.

Slightly disappointing. I'd have expected doctors to have an understanding of placebo effects, and the field of medicine is one where you regularly hear (even outside medicine) about studies being discredited afterwards for wrong/improper controls, positive results being unreproducible etc.

I don't remember the last time I heard of a comp sci paper being withdrawn for bad statistics. (They tend to have none, even when they should.)


I'm not saying doctors doing research never learn this stuff, just that they come out of university without this knowledge. (I certainly don't know enough about the medical research literature to make a claim that strong.)

Curiously, at one biomed lab (I can't remember if it's my wife's) they have a dedicated person (shared by several labs) to do the statistics for the presentations and the papers. Kind of weird if you ask me; you'd think understanding whether your results are meaningful is not something you calculate at the end!


Any time some psychologist or economist tests doctors for base rate neglect or other biases, the doctors perform horribly.


Also important, and related: my wife claims it took a year or so of fellowship training before she learned how to ask a scientific question. She's an MD who did a research fellowship at NIH; I'm a physicist.


CS is a tricky term, in that it is actually (mostly applied) math, and math ain't science...


Sorry to go off on a tangent, but this has always bugged me.

I wish there were better standardization on CS degrees. Depending on the school, CS can be part of the math department, science department, engineering department, or a liberal arts department.

My school grouped it in science/engineering and required a bunch of math, and quite a few science classes. I've met people with the same degree from other schools who think it's crazy I took so much science and are surprised I had so few liberal arts classes.

It'd be nice if everybody were on the same page.


For me, it was an engineering track (the actual degree is in Computing Engineering), and we did have basic science courses as well. The 'real' CS stuff though, was mostly math and logic (algorithms, language theory, semantics, complexity theory and so on), but there were more engineering-like courses as well (operating systems, networking, computer architectures, etc).


I tend to agree, math is closely related to science, but it isn't science. A theorem and proof is different from a hypothesis and test (a hypothesis and test is generally what I would consider to be the "scientific method").

But "scientist" has a folk meaning. If you ask "is math science?", I'd say no. But if you ask "how many scientists are in congress?" and count someone with a math PhD as a "scientist", I'd have no problem with it.



My wife is a medical doctor doing a PhD right now. By her own admission, doctors are _not_ well-versed in the scientific method _by education_.

She is wise. A good website about current medical practice and whether or not it is based on science, with a lot of guest authors with varying specialties, is Science-Based Medicine.

http://www.sciencebasedmedicine.org/

Some doctors are more scientific than others. The practice of medicine will advance as it becomes more scientific.


The process of diagnosing a disease works quite like the scientific method. You form a hypothesis based on symptoms, and then you test your hypothesis by ordering various lab and other tests. You inspect the results and modify your theory of what could be wrong.


The procedure you just described is susceptible to confirmation bias. You need to take that into account, as well as how much evidential power your tests actually carry as Bayesian updates. A lot of doctors order tests even though their evidential value is not high.

For example, consider a test with a 1% false positive rate, for a disease with a 0.1% incidence rate given a particular set of symptoms. If you test positive, what's the probability that you have the disease? Too many people think it's 99% (the complement of the false positive rate); but in reality it's more like 10%: roughly 0.1% of patients are true positives while about 1% of the healthy ones are false positives, so only about one positive in eleven actually has the disease. But you still formed a hypothesis, did a test, and the result came out OK. You think you're doing science, but in reality you're cargo-culting.
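
To make the arithmetic concrete, here's a minimal sketch in Python (assuming a perfectly sensitive test, which the example leaves unstated; the numbers are the made-up ones above, not real clinical data):

    # Base-rate arithmetic for the hypothetical test above.
    p_disease = 0.001             # 0.1% incidence for these symptoms
    p_pos_given_disease = 1.0     # assumption: perfectly sensitive test
    p_pos_given_healthy = 0.01    # 1% false positive rate

    p_pos = (p_disease * p_pos_given_disease
             + (1 - p_disease) * p_pos_given_healthy)
    p_disease_given_pos = p_disease * p_pos_given_disease / p_pos
    print("P(disease | positive) = %.1f%%" % (100 * p_disease_given_pos))
    # prints about 9.1% - nowhere near 99%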

Double-blind trials and high-quality statistics are prerequisites for good work here.


Most doctors won't bother ordering tests for a disease with a 0.1% incidence rate (or even a much higher one) until more common and probable causes are excluded. And even then, they would most likely refer you to someone who specializes in the very narrow field and could potentially make the correct diagnosis with the existing data.

Double-blind trials are for clinical studies. You don't do double-blind trials on a single patient :D.


One anecdote was related on EconTalk, with Russ Roberts and Arnold Kling: http://www.econtalk.org/archives/2007/11/arnold_kling_on.htm... - from the summary (you can listen to the podcast for more):

"Bayes Theorem, 40 year old man with microscopic blood in urine, incidence of serious illness for that symptom at that age in a non-smoker is low, so more sensible to not have the extra test. Doctor didn't understand Bayes Theorem, was deeply offended and yelled at Arnold's wife"

I chose my example to explain Bayes and the base rate fallacy nice and clearly. An example chosen on didactic grounds is easy to attack on realism grounds. But it really does happen. Honest!

My point is, if you're not accounting for bias - an easy mistake to make - you are not doing science. And thus, a doctor's diagnosis is frequently not like science, because it lacks good-quality, objective procedures to exclude bias.



