What happens when patients find out how good their doctors are? (2004) (newyorker.com)
314 points by adenadel on Dec 4, 2017 | 224 comments



As a physician, I often think about how we lack truly objective assessment of patient outcomes (either in the context of evaluating physician competence or, probably more importantly, assessing and improving upon clinical practices).

I would really appreciate insight on how this could be achieved.

There are several issues which are particularly vexing:

- The distinct lack of verifiable, objective markers of physician competence.

- Each patient's case is unique and cases with the highest levels of difficulty are often treated by the most experienced people. These cases, of course, are likely to have worse outcomes than simple cases which may be treated by less experienced (worse?) physicians.

- Clinical outcomes are largely recorded by the same people treating the patient so reported outcomes are often erroneous or frankly fraudulent.

- This is made worse by the hierarchical nature of clinical medicine and deference to seniority and title.

- Medicine is parochial so clinical practices for the same disorder vary tremendously. You might be treated a dozen different ways for the same disorder and presentation depending on the facility and especially on the specialty that ends up treating you.

- Outcomes are not necessarily determined by clinician ability. There are several other factors at play: the pre- and post-care (such as work-up by ancillary staff or ICU care after a surgery), the cohesiveness of the facility and its efficiencies (or lack thereof), availability and preferences for resources such as medical devices, drugs and hospital equipment which may be largely out of the hands of the physician.


This is exactly the problem we have been working on at the startup I founded, Outcomes.com. As a physician and surgeon in training before moving out to the Bay Area, I was amazed that we had very poor visibility and almost no data on how our patients did after treatment, besides very crude measures like whether the patient had major complications or died.

Our focus has been on capturing patient-reported outcomes - that is, the outcome of care as experienced by the patient, measured using objective and validated surveys that are often specific to the condition or treatment. There is now a movement among payers and Medicare to incorporate these kinds of patient-centered outcome measures into reimbursement, although change is admittedly slow.

I'd love to talk to people wanting to make an impact in this field, my email address is francis at outcomes.com


Perhaps I'm missing something, but how could patient-reported outcomes possibly be objective? Patients are notoriously bad at assessing their own conditions and establishing causality for changes. We know that patients tend to give positive survey results if the physician was "nice" and if he wrote a prescription, regardless of actual quality of care.


Most patient-reported outcome measures are designed to provide an objective assessment of a patient's health status. For example, in urology many of the surveys ask specific questions about urinary symptoms (e.g. how many times you had to get up in the night to urinate), and in orthopedics whether you had difficulty performing specific tasks related to your joint.

Some aspects are always going to be subjective (e.g. impact on quality of life or pain) and IMHO that's OK and we should absolutely attempt to measure them, not least because that information could help inform the treatment itself. Also, by measuring the changes in response over time for a patient, you can attempt to control for individual biases.
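A minimal sketch of that within-patient change idea, using made-up scores on a hypothetical symptom index (the scale and the numbers are assumptions, not from any real PROM instrument):

    import numpy as np

    # Hypothetical symptom-index scores (higher = worse), recorded for the same
    # five patients before treatment and at follow-up.
    baseline = np.array([22, 18, 30, 25, 15])
    follow_up = np.array([12, 16, 19, 24, 10])

    # Each patient serves as their own control, so stable response styles
    # (chronic over- or under-reporters) partially cancel out.
    change = follow_up - baseline
    print("mean change:", change.mean())           # negative = symptoms improved
    print("share improved:", (change < 0).mean())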

I agree that patient satisfaction surveys (in the UK, categorized as patient reported experience measures) can be very prone to bias and while important, are not necessarily correlated with outcomes.


The trouble with that is it doesn't establish causality. Did the patient's symptoms improve because of the provider's intervention or in spite of it? Plus you can't force patients to respond to the survey so there's no way to know if you have a representative sample. In my subjective experience, patients that are happy and improving are more likely to answer those questions than patients who are unhappy and ill.


X people per hour over a few years averages out much of the random noise. Even surgeons see multiple patients per day on average, and a few years of that is recent enough to still be relevant.


The real issue here is the selection bias in the caseloads of good vs poor physicians. In psychological treatment teams it's common for the caseload to be (implicitly or explicitly) allocated based on the perceived skill of team members. Note — this may not necessarily correlate with their actual skill, but it still screws up any estimates of provider performance unless you have very good prognostic indicators of outcome from before treatment (and you likely won't).


No it doesn't average out at all due to persistent differences in patient populations across providers which can't be adequately controlled for using the available data.


You misunderstand. You have a representative sample of the patients seen by that doctor. Individual differences like patient weight may be very important at the individual level, but across thousands of people that's far less important than the overall differences across populations.


No that's simply not how it works and won't give you an accurate picture of quality differences between providers. I don't know how to make it any more clear.


Comparisons between providers are a separate issue.

Suppose you assigned people randomly to 101 doctors from 2 populations (A, B), and suppose A was 10x as likely to die. D(0) gets 0% A's and 100% B's. D(1) gets 1 A and 99 B's. All the way to D(100), which only gets A's.

In that admittedly simplified example you could still compare them: if D(100) got only 8x as many deaths as D(0), D(100) actually did the better job, even though a ratio like 9.8x might not be statistically distinguishable from the expected 10x.

Yes, the real world is vastly more complex. But while that may make a strict ordering impossible, you can likely establish that the best doctor is very likely in the top quarter and the worst doctor very likely in the bottom quarter, which can be useful.

Picture a score card that said 80% chance in the (0%-20%] band, 15% chance in the (20%-50%] band, etc. That's not exactly meaningless information.
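A rough Monte Carlo sketch of that thought experiment (the death rates, the 10x ratio and the per-doctor patient volume are all hypothetical): compare each doctor's observed deaths against a case-mix-adjusted expectation rather than against other doctors' raw counts.

    import numpy as np

    rng = np.random.default_rng(42)
    P_DEATH_A, P_DEATH_B = 0.10, 0.01   # population A is 10x as likely to die
    N_PER_DOCTOR = 1000                 # assumed patient volume per doctor

    observed, expected = [], []
    for k in range(101):                # doctor D(k) sees k% A's and (100-k)% B's
        n_a = N_PER_DOCTOR * k // 100
        n_b = N_PER_DOCTOR - n_a
        deaths = rng.binomial(n_a, P_DEATH_A) + rng.binomial(n_b, P_DEATH_B)
        observed.append(deaths)
        expected.append(n_a * P_DEATH_A + n_b * P_DEATH_B)

    # A doctor whose observed deaths sit well below the case-mix expectation is
    # probably doing a better job, even if their raw death count is higher.
    excess = np.array(observed) - np.array(expected)
    print("D(0):   observed", observed[0], "expected", expected[0])
    print("D(100): observed", observed[100], "expected", expected[100])
    print("doctors beating their case-mix expectation:", int((excess < 0).sum()))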


Nope that doesn't work because there's still no way to reliably control for the confounding factors. We don't even know what all the confounding factors are. Patients aren't randomly assigned to providers.


> Patients aren't randomly assigned to providers.

Sure, we don't have that information today. But, if we want to collect relevant information we could easily have a subset of doctors with random patient assignment. IMO, that's simply an implementation detail required if we want good information.

PS: US healthcare spending is over 3 trillion per year, rationally even minor improvements are worth large investments.


None of that is actually easy. Spend a few years working in a clinical environment and then tell us how easily it could be done.


I do work in healthcare, for HHS, and this is very much the kind of thing we do. Change is hard; data collection is significantly easier.


I imagine the time aspect is especially important in some areas. For example, patient impressions of orthopedic surgery are guaranteed to be terrible in the short term, but it's the long-term outcome--e.g. mobility--that might really matter, after everything heals and recovers.


Few initiatives are as bad in practice as PROMs (patient-reported outcome measures). Very good for people who like power lunches, Excel spreadsheets and having numbers to vocalize - worse than useless for patients and physicians.


There is another danger related to your second point. If you can't completely control for the risk of a patient, then you create an incentive for doctors to turn away high risk patients. This effect has been observed in real life: https://www.kellogg.northwestern.edu/faculty/satterthwaite/R...


I did some market research back in the day and there were a lot of negative opinions on even attempting to measure performance based on patient outcomes because “each case is unique”.

To me there were two things at play: the sample size (the number of patients a doctor sees with the same problem) might be too small, and there was genuinely a lot of fear that a doctor’s skill could be boiled down to protocols and statistics.

I think the biggest hurdle toward improving patient outcomes will be alleviating the fear and then after that figuring out how to give better context to the data.


As someone who works in education, I find it interesting that "each case is unique" hasn't prevented outcome based metrics in education.


Isn't this a common objection by teacher unions when pushing against using metrics for retention and compensation?


They may be right, but they have a very strong prior interest in this matter (that there can exist no objective measure of a teacher's competence), so they're not the best arbiter of objective insights here.


I know most of HN hates unions, but teachers unions are usually there to create a better workplace, which ultimately makes for better teaching outcomes.

The difference between a 24 student class and a 35 student class is huge in terms of educational outcomes, yet here in the US we tend to cut spending on and underfund education, either by ripping away stable revenue sources like property tax and replacing that with lottery income (which varies wildly from year to year), or by just slowly reducing the amount of money allocated toward schools year after year.

Hence how you end up with places like Seattle having well funded schools due to local ballot measures, yet a similar school in Eastern Washington will get 20% to 30% less funding, and have abysmal results due to it.

Proper education isn't a technology problem, but an investment problem, and rather than investing in schools today, we would rather lock many of those students up in the future here in the US.


>but teachers unions are usually there to create a better workplace, which ultimately makes for better teaching outcomes.

Unions exist to lobby for their members; teaching unions represent teachers, not students. All sorts of things might be in the interests of students but contrary to the interests of teachers.

The obvious example is the New York City Department of Education's "reassignment centers". About 600 teachers in NYC are paid their full salary to sit in an office doing nothing. They're not sufficiently trusted to teach due to allegations of misconduct, but neither can they be fired due to strict tenure rules. This system is believed to cost $65m a year. Nobody is happy with the system, but the only way it can meaningfully change is if it becomes easier for the NYC Department of Education to fire teachers.

It's entirely plausible that pupils might be served better by teachers who are brutally over-worked and subject to constant management scrutiny. We know from the private sector that wringing your employees dry is often a very effective way of improving the bottom line. That wouldn't necessarily be the right thing to do, but it simply doesn't follow that what's good for teachers is always what's good for students.

https://en.wikipedia.org/wiki/Reassignment_centers


> Nobody is happy with the system, but the only way it can meaningfully change is if it becomes easier for the NYC Department of Education to fire teachers.

It could meaningfully change if the NYC Department of Education paid their share of arbitration fees, or if the DoE and the union agreed to hire more arbitrators and work through the backlog of cases, or if the DoE got better at reassigning those teachers who have been cleared (many of those in the reassignment centres are teachers who were cleared in their arbitration hearings but the DoE haven't assigned to teaching positions). Partly this is because the DoE doesn't really believe in innocent until proven guilty, and part of that is because they don't trust their own bureaucrats to correctly document genuine misconduct and prove it to the standards the arbitrators require. Fixing that is absolutely on the DoE.

What skilled, diligent person would want to work in a field that consists mostly of unsupervised interaction with children if one unsupported allegation of misconduct could be a career-ender? That's where you'd be without the arbitration agreement. In much of the rest of the world that's how firing people from any job works - you can't do it without evidence. Somehow those countries manage to be productive. A competent department of education would be able to do the same. Giving an incompetent department more ability to fire people would not be progress.


>Unions exist to lobby for their members; teaching unions represent teachers, not students. //

In the UK I have some knowledge of the work of one of the largest teaching unions, the NASUWT.

The members are teachers. They're primarily teachers because they have a passion for teaching. As such their union actively works to improve _learning_ in schools and so lobbies for activity that benefits pupils.

An example: pupils get a certain amount of money allocated; that money is allocated politically and in a divisive way and does not impact wages at all; nonetheless the union lobbies for fairer and more transparent distribution of such pupil stipends.

The union does help its members, but it also does research, shares best practice, and I dare say does other things that help improve _teaching_ too.

They also campaign on social issues - racism, sexism, forced marriage, ... even when it's not directly related to teaching.

Perhaps your characterisation is true of specific teaching unions in USA?

The issue with the Reassignment centres appears to be they can't sack teachers for doing unlawful things without a trial (seems reasonable, presumption of innocence), they lack evidence to go to trial, but they feel the teacher can't be trusted.

What's your solution? Sack people without due process? So sack teachers when there's a suggestion they might have done something wrong? In situations where pupils can easily make false allegations (without repercussions on themselves) that enables pupils to get teachers fired very easily; is that fair in your opinion?


> Unions exist to lobby for their members; teaching unions represent teachers, not students.

And what do teachers want? To educate students to the best of their ability. Yes, I'm giving the benefit of the doubt here that many teachers actually want to educate and aren't just in it to "get rich", because if one's sole motivation is making money then god knows there are a myriad of better choices than becoming a teacher.

Imho this representation of unions as "oh so greedy money grubbers who only do it for the money" really irks me, especially considering that the original reason unions became a thing was to counter exactly such greedy, money-grubbing exploitation of labor en masse, back when labor was still living in serfdom or straight-up slavery.

Unions do not just serve the purpose of "making union members richer", even if that has become the de-facto reality in some places. Without unions it is doubtful anything like workplace safety regulations would ever have become a thing; many standards for healthy and productive work that we take for granted today are the direct result of unionization in some way or another.


Your numbers indicate the average salary for these 600 teachers is over $108k, I think your numbers are wrong.


I am not OP, but it only indicates that salary + benefits + HR / admin costs + space per person = $108K, which probably equates to a salary closer to $60K.


It isn't about "hating" unions, or otherwise, but about recognizing the simple fact that unions aren't magical unicorns, they are interest organisations and the teachers' unions represent the interests of the teachers. Now, those interests are often aligned with the interests of the students, but students' interests do not flow from teachers' interests.

To get back on topic, teachers are opposed to objective measures because if you can objectively tell if a teacher is bad, the union will lose influence over pay and employment security, which is where their power is anchored. Thus, they will of course promote studies that show the failures of such objective measures and try to discredit those that show the opposite. Just like any other interest organisation.

That doesn't mean that I have a problem with the existence of interest organisations, but let's not pretend they are something they're not.


> To get back on topic, teachers are opposed to objective measures because if you can objectively tell if a teacher is bad,

Or that, since no such measurement exists, pushing for one is dishonest and betrays other agendas.


Of course. But the whole point is that we shouldn't treat a teachers' union as an authority on whether it does (or can) exist. And no, it's by no means an established fact that it can't.


In education, you're effectively looking for long-term financial success of your students, and you search for proxy metrics that predict it. Most of the things you need to control for are the same socioeconomic factors behavioural scientists are working on in other fields. Also: Since almost everyone goes to school, there's a lot of easy-to-anonymise data.

In medicine, I don't think there's a clear long-term metric (lifespan? financial success? happiness?), and there are at least two nasty selection biases: people generally only see a doctor when (by self-report) something is wrong, and doctors specialise. But let's say you solve those problems: what kinds of questions[1] are you going to ask that are supposed to predict satisfaction with the doctor?

[1]: https://londoncalling.co/2013/02/simple-customer-feedback-id...


I don't see "long-term financial success" as being obviously the right metric for judging education at all. It's worth considering, absolutely! But people choose to forego maximum wealth for other priorities all the time, and often for very desirable reasons (both individually and socially). Why shouldn't the metric have something to do with long-term personal fulfillment and happiness? Why shouldn't it have something to do with long-term contributions to society?

I agree with you that the more abundant data (and reduced selection bias) available for schools makes measurements easier for them than for medicine, but I don't think that it's any easier to define an appropriate metric for judging them.


HN really likes to talk money over everything, in spite of how money is merely a means to an end.

I think a better metric would take into account overall health history, years paid into Social Security (as a proxy for income), and food/housing stability. Look at those three factors, and you can get a grip on how a person's life has gone in a way you can compare to others.


I sure hope my kids' teachers don't share your views of their success metric.


Metrics used to evaluate teachers in schools turn out to have extreme variance.

In cases where the same teacher was subject to multiple assessments (there's a Math Nerd writing for Bloomberg who presented this, I think at TED), there was no correlation between the multiple measurements.

Straight-up noise.

Cathy O'Neil, "MathBabe", "Algorithms are Opinions in Code".

Starting at 2m22s here: https://www.youtube.com/watch?v=_2u_eHHzRto


A very simple hypothesis as to why doctors can argue this effectively but teachers cannot is that teachers are not as powerful a lobbying group as doctors.

Doctors have effectively been able to defend their "turf," from hostile encroachment, while teachers have not, not because the situations do not contain substantial parallels, but rather because doctors are politically strong while teachers are politically weak.


The term "Gods in white" exists for a reason, many people ascribe a lot of authority and knowledge to physicians and doctors in general. Doubting your doctor's diagnosis is usually considered a rather odd thing to do and if it's a doctor of psychology it could even be interpreted as a symptom.

While with teachers it's kinda the opposite: even though their whole job is to know and teach things, many people have a way easier time disagreeing with them, straight out of principle.

I wonder how much of that boils down to socialization, i.e. the contexts in which children are introduced to these professions.


Opposition to most doctors is low and they aren't being forced to be rated by their clients, whereas most teachers have a weak union in a district with poor funding, and thus they have to contend with overloaded classes that result in poor educational outcomes, which lead to most students not going on to do much with their lives.

It's much easier to punch down than it is to punch up!


I find it interesting that teachers have a union and doctors have a professional association. I think that distinction goes a long way in the comparison.


Yes people have come up with these metrics (mostly bureaucrats with a fetish for anything that looks like data).

As far as I know no-one has come up with a metric that's actually useful for students and teachers.


It really should, but I think it's a much more difficult problem politically to explain why every case is unique!


Their fear (for their own professional standing and income stream) is justified, but the objections to measuring performance are special pleading.

1) "Each patient is unique" just like "each student is unique"...but we still give standardized tests (to the grumbling of below-average teachers and administrators)

2) If the sample size is too small because the disease is rare, there may be a genuine measurement problem. If the doctor or hospital's sample size is too small but the illness is common, then they should be referring those patients to providers/facilities with the requisite experience levels.


There is substantial argument that our focus on standardized testing is literally ruining the advantages traditionally associated with a Western Education.

But then, perhaps I would be a "below-average" teacher, and my opinion might therefore similarly be discarded as worthless. We'll never know for certain, because I don't make a habit of boarding sinking ships - especially when I'm slated to receive blame for their sinking after boarding them. (And I did at one point very much want to enter the field of secondary school education.)

(Slight modification: I won't board a sinking ship and take blame for its sinking after the fact without some substantial advantage, such as excellent remuneration, being offered as well - teaching offers no such advantage, except perhaps self-actualization, and I can neither eat nor sleep in that.)


Sure, bad comparison. It's consistent to support standardized rating of doctors but not teachers. Teaching exists to enrich, whereas medicine properly understood only exists to solve problems.

(I find it curious that another poster takes the opposite tack, claiming that teaching exists only to maximize the financial success of students, whereas medicine has a more nuanced end. I confess to finding this position truly bizarre.)


For what it's worth, I've heard a whole lot of above-average teachers complain about standardized testing, too. The things that one does in a classroom to optimize standardized testing performance are often not the same things one would do to optimize creative problem solving, or independent learning, or good citizenship, none of which are captured by (existing?) standardized tests. For that matter, the things that a third-grade teacher would do to optimize third-grade standardized test scores aren't necessarily the same things that the same teacher would do to optimize twelfth-grade standardized test scores.


Having worked a little bit on #2, even for relatively common conditions, stochasticity still plays a major role in outcomes.

For example, I have a simple model that ends up showing that an ICU doing everything it's supposed to do (patient isolation, high hand hygiene compliance, perfect diagnosis, etc.) can have a four-fold difference in MRSA infections over a year by chance alone.
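For anyone curious what "by chance alone" can look like, here is a toy simulation (the admission count and the per-admission acquisition probability are invented, and this is not the parent's actual model): identical ICUs doing everything right still show very different yearly MRSA counts.

    import numpy as np

    rng = np.random.default_rng(7)

    N_ADMISSIONS = 900    # assumed admissions per year
    P_INFECTION = 0.005   # assumed per-admission MRSA acquisition probability
    N_YEARS = 10_000      # simulated ICU-years, all with identical practice

    yearly = rng.binomial(N_ADMISSIONS, P_INFECTION, size=N_YEARS)
    lo, hi = np.percentile(yearly, [2.5, 97.5])
    print(f"mean {yearly.mean():.1f} infections/year, 95% range {lo:.0f} to {hi:.0f}")
    # With these numbers a "bad" year can easily show several times as many
    # infections as a "good" year, with no difference in practice at all.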


If a hospital doesn't have sufficient experience with a common disease, I'd argue they should pull in experts from elsewhere and try to gain the necessary experience; otherwise you just end up exporting the problem.


Case mix and case severity are real concerns. Physicians (and hospitals) vary tremendously in each, and often the best physicians get the hardest cases, which, if not accounted for in metrics, make them look bad.

Data are also often tremendously thin -- I've seen profiling efforts over tens of millions of patient-lives in which a given provider may only have a handful of records. That's not enough to draw strong inferences, and year-to-year variance is going to be tremendous.

It also turns out that there are very strong clusterings of patterns around facilities: a senior physician or surgeon can (and does) drive practice, quality, and methods for much of the rest of the medical staff.

The single best predictor of quality I recall was volume of procedures. Practice (and standardisation) help tremendously.


I agree. As an engineer or programmer, you tend to get immediate and pretty conclusive feedback about whether the system you are building works.

You can iterate and fix your mistakes.

My experience with the world of medicine is that it is way less scientific and evidence based. This is because it can be more complex, but it does lead to doctors being able to ignore the fact that their treatments don't work.

Bad doctors can bury their mistakes and blame other factors, whereas bad engineers quickly get found out.


I’m not so sure about that. How many programmers that are senior build up a system only to have it nearly fall to pieces after they leave?


Software maintainability is hard to measure and test, and programmers who are bad at writing maintainable software can hide in the noise. But we do at least have some level of objective accountability in terms of "can the program do the thing?" Doctors largely don't even have that much.


I think that holds true for politicians too, especially since there is no standard framework to see whether what they have done/built has worked.


One of the hard truths you have here is outcomes != ability. Treatment compliance is almost always a much bigger factor. Compliance can also be greatly affected by the patient experience and relationship with the care provider.

I think the thing we really need is a better patient experience model that maximizes trust and information transfer and helps ensure compliance. (Disclaimer: I led a team building one of the top patient experience platforms for several years)


That is probably the biggest takeaway for me from the article. The insufferable Dr. Warwick gets his patients to comply. That is one of his cutting edge skills.


Another possibility is that Warwick's manner might be acting as a filter for patients who are already determined to go the extra mile for their health, and causing any other patients to leave.

Perhaps any doctor would have better stats if they got to have the same pool of patients as those who remained with Warwick, without having other less motivated patients to drag them down.

We can't know if this is the case until we collect the data on patients who left Warwick's care.


> I often think about how we lack truly objective assessment of patient outcomes

We don't really need truly objective assessments. As an analogy, imagine we randomly added or subtracted 3 minutes to the finishing times of all the runners in the Boston marathon. We wouldn't have a truly objective assessment of each runner, but it's still a useful guide about who is likely to beat whom in the next marathon.

> - Each patient's case is unique and cases with the highest levels of difficulty are often treated by the most experienced people. These cases, of course, are likely to have worse outcomes than simple cases which may be treated by less experienced (worse?) physicians.

This problem can be alleviated to a large degree by risk-adjusting the outcomes: For each patient, estimate the most likely outcome and compare the actual outcome to the estimate.

> - Clinical outcomes are largely recorded by the same people treating the patient so reported outcomes are often erroneous or frankly fraudulent.

I don't see this as an insurmountable obstacle. For example, you could have an independent body randomly sample the reported outcomes to check that they are accurate, and apply some kind of penalty if they are not.


> We don't really need truly objective assessments. As an analogy, imagine we randomly added or subtracted 3 minutes to the finishing times of all the runners in the Boston marathon. We wouldn't have a truly objective assessment of each runner, but it's still a useful guide about who is likely to beat whom in the next marathon.

I don't think the analogy fits. There's no single metric (like runner speed/time) to evaluate competence.

> This problem can be alleviated to a large degree by risk-adjusting the outcomes: For each patient, estimate the most likely outcome and compare the actual outcome to the estimate.

Agree.

> I don't see this as an insurmountable obstacle. For example, you could have an independent body randomly sample the reported outcomes to check that they are accurate, and apply some kind of penalty if they are not.

That would be a good solution. Part of the problem is finding a truly independent body. Medicine is over-run with various governance and regulatory bodies (several of which have been shown to be little more than rent collectors). And again, there's the problem of deference to eminence combined with small in-bred communities within each speciality or sub-specialty (who would likely be the only people with the training to evaluate their peers reliably).


> I don't think the analogy fits. There's no single metric (like runner speed/time) to evaluate competence.

Let me suggest such a metric: A T-value. That is, the degree to which the actual outcome deviates from the expected outcome.

For example, say you have a hospital that does stem cell transplants. For each patient before the treatment you assess the "chance that the patient will die within 1 year" based on that patient's age, sex, BMI, heart and lung function, type of disease, time since last relapse, quality of donor match, etc. From this you estimate that 19.6% of the hospital's patients who are treated over a particular period will die within 1 year, with a standard deviation of 3.7%. The actual mortality rate for this group of patients turns out to be 26.2%. So the T-value is +1.78; this is the 'single metric' used to evaluate the competence of the hospital.
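A minimal sketch of that calculation (the per-patient probabilities below are invented, and deriving the standard deviation from independent Bernoulli outcomes is just one possible modelling choice):

    import numpy as np

    def t_value(pred_probs, actual_deaths):
        """Compare actual 1-year mortality to the risk-adjusted expectation."""
        pred_probs = np.asarray(pred_probs, dtype=float)
        n = len(pred_probs)
        expected_rate = pred_probs.mean()
        # SD of the cohort mortality rate under the risk model, assuming
        # independent Bernoulli outcomes per patient.
        sd_rate = np.sqrt(np.sum(pred_probs * (1.0 - pred_probs))) / n
        actual_rate = actual_deaths / n
        return (actual_rate - expected_rate) / sd_rate

    # Toy cohort: 120 transplant patients with predicted 1-year mortality
    # risks averaging roughly 20%, as in the example above.
    rng = np.random.default_rng(0)
    probs = rng.uniform(0.05, 0.35, size=120)
    print(round(t_value(probs, actual_deaths=31), 2))  # positive = worse than expected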

> there's the problem of deference to eminence combined with small in-bred communities within each speciality or sub-specialty (who would likely be the only people with the training to evaluate their peers reliably).

Keep in mind that expertise is only required in estimating pre-treatment "chance that the patient will die within 1 year". Determining whether a patient is alive after a year and calculating the T-value can be done by anyone. And even when estimating expected mortality, there is plenty of evidence that a simple algorithm can actually beat the experts - Daniel Kahneman devotes a whole chapter to this point in 'Thinking Fast and Slow' [1].

[1] Chapter 21: https://www.amazon.com/dp/B00555X8OA/ref=dp-kindle-redirect?...


Overall, I like the idea of a metric such as the one you describe. However, a change in one-year mortality rate is unlikely to be a reliable indicator of the efficacy of most treatments. Most medical interventions have more modest effects than a definite improvement in short-term survival.


Sure, there might be better metrics than 1-year survival and different treatments could have different metrics.


Nice theory, but in reality there's no way to do accurate risk adjustment. We just don't have enough data and so many physicians would end up unfairly penalized due to factors outside their control. Perhaps in 100 years we'll be in a better position to do something like that.


Here's a hypothetical question: Suppose I examine the performance of 100 doctors and I rate ten doctors as being the very best, and ten as being the very worst. Suppose that 9 out of the ten 'best' doctors are indeed the best, but one is in fact just average - she doesn't deserve to be rated in the top 10. Similarly 9 of the ten 'worst' doctors are indeed exceptionally bad, but one is actually no worse than average.

If I were to publish my ratings, one doctor will be unfairly penalized. However, hundreds of patients will benefit by being able to switch to a better doctor. In this hypothetical situation, do the rights of one doctor outweigh the rights of so many patients?


Even as a hypothetical that's an unrealistic question. Since we have a shortage of doctors in many regions and specialties due to price controls and educational pipeline bottlenecks, even if it was possible to rate doctors with a reasonable degree of accuracy (and in reality this generally isn't possible) some patients would still end up getting stuck with the worst performers. The only difference is that instead of the selection being mostly random as it is now, under your hypothetical the patients who would switch to the best doctors would be the wealthiest and best informed; the poor, elderly, and illiterate would continue to get screwed. Is that really the outcome you want?


Now who is being unrealistic? I accept that demand for medical services is not as elastic as in other industries, but to suggest that exposing a group of poorly-performing doctors would have no effect on their patient numbers seems rather far fetched. Even if that were the case, the sudden loss of rich patients might make a few of these doctors consider early retirement.


>Since we have a shortage of doctors in many regions and specialties

We also have record levels of advertising by not only drugmakers but also doctors and hospitals. If there is a shortage of providers, why is so much being spent on demand generation efforts?


Because healthcare industry incentives aren't aligned to optimize for Quality Adjusted Life Years per dollar. Some procedures and drugs are highly profitable because of high demand by patients with money, but those aren't necessarily the ones that society actually needs.


Is this an argument against tracking and releasing data? Let people do their own risk adjustments.

If twice as many people who are going to one doctor or hospital are ending up dead or crippled, and there are no discernible confounding factors (age, socioeconomic, co-morbidities, etc), then I want to know. One might even say the patient has a right to know. The burden should be on the provider/facility to convince the patient to trust them nonetheless.


Yes this is an argument against releasing data. You and most other patients lack the skills and context to interpret the data in a meaningful way. And no provider currently captures all of the confounding factors. In fact, for most serious conditions we haven't even done enough research to understand what all the confounding factors are.

If we go with your proposal then the inevitable outcome is that the best providers (particularly surgeons) will engage in metrics arbitrage by refusing to treat patients whose co-morbidities and complications aren't adequately captured by standardized coding systems and clinical guidelines. Is that really the outcome you want?

As for putting the burden on providers to convince patients to trust them, good luck with that. We currently have shortages of providers in many areas and specialties due to price fixing and supply constraints. So most patients have to take what they can get regardless of trust.


What about, say a statistician? Good odds that a statistician or mathematician would have a better grasp of what the numbers imply than a medical practitioner.

Based on the doctors I know, the medical world does not have an especially profound understanding of how to deal with large amounts of data. Their specialities are diagnosis or specialist surgery, not data.

If the data is so scary we have to hide it then there are glaring problems that need to be addressed. Sure, there are misleading edge cases, but believing that accurate data will be worse than word of mouth is, quite frankly, unwarranted.


I'm an epidemiologist who works in exactly this field, and I can tell you right now, even with full access to the data, it's still hard to get a handle on.


Having statistical skills is helpful, but statistics only gives part of the picture. Patients often have bad outcomes despite providers adhering to the best current standard of care. And that doesn't just average out as noise, because there are persistent differences in the patient populations between providers. We have no reliable way to identify and quantify all of those confounding factors. The data and the clinical research simply don't exist yet.


Your views on bodily autonomy seem bizarre. Patients own their own bodies and have the right to make fully informed decisions about what happens to them.

If a provider's or facility's (or entire speciality's) numbers look bad, that's important information for the consumer to possess. Hiding the numbers should not be an option.

Or do you think that car crash test results, and airplane crash data, and health department inspection findings should also be kept secret?

More problematically, if you can't articulate a quantifiable standard that will indicate a provider's quality level, then the entire concept seems ill-defined. I.e., when you refer to the "best providers", what does that even mean?


> Your views on bodily autonomy seem bizarre. Patients own their own bodies and have the right to make fully informed decisions about what happens to them

Their statement only seems bizarre when you view it from the perspective of individualism. Another way to look at it is - let me (ab)use a Star Trek quote: "The needs of the many outweigh the needs of the few". You seem to make absolute statements that are in fact relative and rooted in one ideology you chose to follow.

> If a provider's or facility's (or entire speciality's) numbers look bad, that's important information for the consumer to possess. Hiding the numbers should not be an option.

Forgive me the harshness - I believe this is a simplistic way of looking at the problem. OP clearly showed systemic forces at play, and those are important to consider. You're saying "hiding the numbers", as if they were perfect numbers hidden away in a safe. The problem is they are not, and even where they exist they might be very nuanced. A point raised by OP: "best providers (particularly surgeons) will engage in metrics arbitrage by refusing to treat patients" - I think it is a perfectly reasonable threat, which you chose to ignore.

> Or do you think that car crash test results, and airplane crash data, and health department inspection findings should also be kept secret?

Again going back to OP's example - doctors might be incentivised not to treat patients who make their numbers look bad. None of your examples are similar in this sense (and possibly in many more).


>doctors might be incentivised not to treat patients who make their numbers look bad.

If your concern is with metrics arbitrage, then it is not necessary to use quantitative metrics to satisfy concerns about bodily autonomy. One might instead require providers to furnish anonymized records of all the adverse events their patients have experienced, along with any mitigating factors they think absolve them of responsibility.

I think the issue of quantification is actually a red herring. A simpler example will indicate whether or not there are differences in our ethical intuition. I am scheduled for a "routine" surgery, but yesterday's patient who was having the same surgery by the same doctor had a major artery sliced, bled out and died. All the staff are aware of what happened, but no one tells me. In fact, even if I ask, people are instructed to say nothing. If I possessed this information, I would almost certainly decide not to proceed. I go ahead with the procedure based on the understanding that death is a much more remote possibility than those treating me happen to believe. Is this or is this not a violation of my bodily autonomy?


You've got to be kidding. Providers are never going to do the extra work to anonymize data and compile the reports you're looking for. We already have a shortage of providers. Who exactly do you think is going to pay for that extra work? I've dealt with clinical records firsthand and in the general case it's simply impossible to automate anonymization. Plus coding all those qualifiers and mitigating factors takes a huge amount of time.


>Who exactly do you think is going to pay for that extra work?

If it puts out of business doctors and facilities (perhaps even specialties?) whose patients agree to care based on grossly inaccurate understandings of their track record, then it would more than pay for itself.


Then we're fortunate that you're not in charge of anything important.


Bodily autonomy is irrelevant here. As a sane adult patient you're always free to decline treatment. You're also welcome to ask your providers for any data they can legally release. But in reality most providers simply don't have the data you're looking for and have no incentive to give it to you even if they did have it. Then if you need care what are you going to do?

You have hit upon the core problem though. Outside of a few limited areas where we have clear evidence-based medicine guidelines there is no reliable quantifiable standard for measuring provider quality.


See: When Consumer Reports decided to rate the "Best Hospitals" based on their infection rates, while ignoring all kinds of other factors that meant some random rural clinics were awesome, while Mayo, Johns Hopkins, etc. got kicked in the teeth.


As someone working in one of "America's best hospitals" I can confirm that those consumer reports are a complete joke.


Yes that's part of the problem. But in fairness to Consumer Reports, some of the major teaching hospitals were actually terrible at adhering to EBM guidelines for preventing secondary infections. And this was simple stuff like washing hands and sterilizing treatment sites.


The problem is not randomly adding a couple of minutes that you can average out again, but systemic biases like the one your parent mentioned: difficult cases are treated by different doctors than easier cases.


Don't focus on the complicated ones (the old people) where things like 5-year survival rates are so muddled up in overlapping conditions and inevitable decline. I once presented at an ER at 4am as a 23yo with every symptom of appendicitis. They shaved me, but then the ultrasound tech arrived for work early. It turned out to be something else and I didn't need surgery. They admitted that 20% of diagnosed appendicitis cases turn out to be something else once they cut in, many still needing surgery but some not. Ten years later I related this story to another surgeon and he was downright angry at 20%. His hospital had that stat down to 10% (in men) and wanted it down to 5%. Those stats, basic conditions with standard treatments but perhaps a complicated diagnosis, are the better starting point. I'd bet good money that the hospital at 5% for appendicitis diagnosis is also doing better at the more complex stuff too.


In fairness, misdiagnosis of appendicitis is largely down to utilisation of technology. Most modern institutions utilise imaging (ultrasound/CT) and this results in a high specificity for appendicitis (especially CT, which doesn't suffer from the user variability that ultrasound does).

Old school surgeons will insist that they can reliably make the diagnosis without imaging (and that's probably where you get a 20% false positive from). Some institutions will push this notion too - either to reduce imaging expenditure or increase surgical turnover (sad but true).


>> to reduce imaging expenditure or increase surgical turnover.

That opens a pile of other issues. For reference, when I showed up at the ER it was a Canadian hospital. They called the surgeon in early, for me. I think word went out that they were spinning up an OR early and the ultrasound tech rushed in. So cost and turnover weren't really an issue. Everyone was salary, the equipment owned, and nobody standing to profit by cutting into me without need. Frankly, everyone had a good laugh at how quickly the system came together. But I did fall into that category of people who showing up on evenings weekends that, statistically, don't do as well.


When is knowingly not following best practices actually malpractice? I wonder if ambulance chasers don't have an actually useful role to play in that fight...


The first part of your comment is a great question.

It's extremely hard to definitively define best practices. You'd be surprised how many practitioners still argue for what look like clearly outdated ways of doing things. There's also the argument that in some people's hands an outdated way of doing things is safer than a newer (better?) way, e.g. a surgeon who has been practising for 30 years but only recently learned to use a laparoscope.

I doubt ambulance chasers do anything other than muddy the waters and enrich themselves at the healthcare system's expense.


>The distinct lack of verifiable, objective markers of physician competence.

This is the same for software, and is equally annoying (especially as a self-taught Dev), but physicians at least have the assessment of people who have had to endure intellectually difficult course work. You also have several assessments of direct physician ability (boards) and possibly other things.

As for software, I can basically follow tutorials, watch some videos, maybe do a few small projects, and then be able to convince someone to actually give me a job. That's my only measurement -- whether someone will pay for my skills.

You guys are much closer to objectively verifying someone's ability, so hopefully that doesn't seem as vexing for you.


> Each patient's case is unique and cases with the highest levels of difficulty are often treated by the most experienced people. These cases, of course, are likely to have worse outcomes than simple cases which may be treated by less experienced (worse?) physicians.

Every individual is unique and yet we are able to segment folks using multiple variables and draw predictions for a bunch of things across businesses - it's just like saying "every shopper online is unique", which is true, yet we still find ways to influence or target them in somewhat meaningful ways.

You should be able to do the same with patients, by clustering them in larger groups that make sense (for cancer many trials refer to specific mutations nowadays to define whether a patient is expected to get efficacy from a specific drug).

The only question in the end is whether or not you have enough sample size for each doctor. Maybe not. Then it would make sense to at least draw comparisons between groups of specialists across multiple hospitals, at least.

Even if metrics are imperfect to begin with, there is no excuse not to start at least, and improve over time.
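A minimal sketch of the stratified comparison described above (providers, strata and outcomes are all invented): compare providers only within comparable patient groups, and keep the per-stratum counts visible so small samples are obvious.

    import pandas as pd

    # Hypothetical records: one row per treated patient.
    df = pd.DataFrame({
        "provider": ["A", "A", "A", "A", "B", "B", "B", "B"],
        "stratum":  ["EGFR+", "EGFR+", "KRAS+", "KRAS+", "EGFR+", "EGFR+", "KRAS+", "KRAS+"],
        "success":  [1, 1, 0, 1, 1, 0, 1, 1],
    })

    # Success rates per provider within each stratum, plus the sample size,
    # so whoever takes the harder strata isn't penalized for a tougher case mix.
    summary = df.groupby(["stratum", "provider"])["success"].agg(["mean", "count"])
    print(summary)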


I'm curious about your thoughts on a hypothetical. Imagine we have a person who, from childhood, is given daily observation, care, assessment and treatment by the truly 'best' doctors available -- assuming we were able to develop a means to measure that. But aside from that he would live his life in a relatively average upper class way in terms of diet/nutrition, activity, and so on. What would you expect to change? How would you expect his life expectancy to vary against the general population? What if we take general population to mean the general population of Japan?

This is relevant as I think the views here help determine where the focus in healthcare ought to be.


Sure, optimising environmental factors would undoubtedly be helpful and is a laudable goal.


> Medicine is parochial so clinical practices for the same disorder vary tremendously

Oh my, I've been suffering from back/neck pain for years. I will randomly get flare-ups (usually when it gets cold or when I do sport) and things will get worse and worse. I've been going to different PTs/physios/osteopaths for the last 10 years and nothing ever changed. My last doctor told me an X-ray wouldn't do much and sent me to another PT...

Now I'm considering visiting a chiropractor even though reddit has told me not to trust them... But I mostly just endure the pain every day.


I don't know what you're going through, but for a friend with sciatica (or what the consensus was!), they went through several doctors, health foods, drugs and chiro. The Chiro helped a little but the pain largely remained. They tried fitness (mostly running) and it cleared most of the pain after two weeks of exercise.

I understand the pain, keep trying and try whatever's best for you.


There's no easy answer for back/neck pain.

I will tell you that yoga/pilates/other non-ballistic, moderate resistance exercise that targets stabiliser muscles helps many people.

I would avoid surgery at all costs unless you have cord compression, a focal lesion or a fracture.

I'd be very wary of chiropractors - I've seen several patients with vertebral artery dissections (with incapacitating brain stem strokes!) because of neck manipulation. Of course, I have a selection bias but even a small risk of this seems like too much.


Unsolicited suggestion: try https://www.regenexx.com/

I have been following Chris Centeno, M.D. for some years. As far as I know he tries various approaches including stem cells, chiropractors, exercises, etc.

If you do end up being treated by them, let me know the outcome - good or bad.


Randomize patients to providers with the same certification level in delineated service areas.

It will require formalizing the informal process you describe (whereby "cases with the highest levels of difficulty are often treated by the most experienced people"), most likely by creating additional certification levels.


Do you feel the various evidence-based medicine initiatives cannot address these issues effectively?


I have this same conversation with my non-healthcare friends almost yearly. You touched on everything I saw, but I'm not sure I understand what you mean by:

> Reported outcomes are often erroneous or frankly fraudulent

Could you elaborate?


Outcomes (including complication rates) are typically recorded based on history, clinical examination and lab and imaging results.

- History and physical exam are extremely subjective.

- Medical notes and records can be massaged to sound better than reality (The patient didn't have a stroke after their operation. They had "minor lower limb weakness which we anticipate will improve over the next few weeks") or attributed to all sorts of things (patient's cognitive function hasn't worsened, they have some "fatigue" following surgery).

- People can be very selective about which studies are ordered.

- Junior and ancillary staff are reluctant to report poor outcomes about established senior people.

I'll give a real-life example: Professor X is a "world-renowned expert" on neurosurgery at an academic mecca. He claims a "1-2%" complication rate on aneurysm surgeries in the clinic and at any conferences. His residents know he performs outdated surgeries with spectacularly bad outcomes but no one would be willing to talk about this openly unless they want to torpedo their career prospects.


Probably going to sound naive, as I'm not at this stage of my training yet, but if not saying anything led to patients dying or suffering, I don't think it's worth whatever career prospects were potentially attainable. It's ethically horrendous to consider that would be my life until I too held a superiority complex.


Never underestimate the power of a mortgage to make smart people bend over backwards to not notice something.

Someone close to me, a very experienced nurse, reported some misconduct on the part of a senior physician. This was both required by law and an ethical duty.

She essentially got blackballed from that hospital as a result.


Just out of curiosity, did she report this to the hospital itself? If I were placed in that situation, it would make more sense to go to a more impartial authority, such as the state medical licensure board.


Actually what would be naive would be to think that you would be able to change the situation by whistleblowing.

And before you judge anyone too harshly, talk to me again when you have 300k of debt and you're a fourth year resident stuck in a singularly focused career pathway.

Also, the reality is most people have so much else going on to concern themselves with that these things are given little thought.


So how would you go about preventing this from happening? The false reporting of outcomes by those in power.


Honestly, the only way I can think of would be for the system to be rebuilt from the ground up with more objective, reliable, verifiable data (and data capturing mechanisms).

Accumulation of power inevitably leads to abuses of said power, at least in my experience.


I'm assuming you are in neurosurgery, a highly competitive field with a large number of consequences in any given procedure. How would this compare with cardiothoracic surgery, where (at least in the US) each physician has to publicly report their mortality rates. I know that CT surgery isn't nearly as competitive as it once was, but if you have any historical insight it would be much appreciated.


No lie this sounds like a great fit for a reputation based system of openly evaluated experiments on an immutable, open, decentralized ledger.


By "reputation based", do you mean patient report outcomes or something else? Could you expand on "openly evaluated experiments", maybe with an example?


I believe the comment you're replying to is satire (about how everything is a problem that can be solved with a blockchain).


Give participants (patients, doctors) access to subscribe to health data or symptoms (input) and create a consensus of possible treatment plans, iterate through those plans ordered by some metric (reputation, time, cost, availability).

When cure confirmed by patient, diagnosis is proven correct, data is captured, reputation increased among participants in consensus that were correct.

You could even use this as human-in-the-loop machine learning model training for open models.

If you want to explore what it might take to build this, we've just launched an alpha of our data/ml network that can facilitate a dapp built on top of Synapse https://synapse.ai/

Happy to reach out and chat more.


This is ridiculous and completely unrealistic. Most conditions are never really cured and diagnoses are seldom "proven correct". We only have varying levels of improvement and confidence. Patients generally lack the skills to confirm anything themselves, especially because most of them don't understand causality. Did my condition improve because of my physician's treatment, or in spite of it?


How would you improve the pipeline?


Increase funding for large scale clinical studies so that we have sufficient data to develop evidence-based medicine guidelines for a wider range of conditions.


> Give participants (patients, doctors) access to subscribe to health data or symptoms (input) and create a consensus of possible treatment plans, iterate through those plans ordered by some metric (reputation, time, cost, availability).

I'm still not sure I understand what you mean. Could you give an example?

> When cure confirmed by patient, diagnosis is proven correct, data is captured, reputation increased among participants in consensus that were correct.

It's not that simple. Most patients are not either 'cured' or 'not cured'. There are multiple possible outcomes at each stage of diagnosis and management.

> You could even use this as human-in-the-loop machine learning model training for open models.

Doubtful. Current machine learning algorithms are painfully inept when given clinical data (outside of very limited use cases). I'd love to be proven wrong though.

> If you want to explore what it might take to build this, we've just launched an alpha of our data/ml network that can facilitate a dapp built on top of Synapse https://synapse.ai/ Happy to reach out and chat more.

Sure, email?


dan [at] synapse [dot] ai


I doubt the lack of objective assessment is an unfortunate accident.

More likely things are this way because the relevant people and institutions don't want their performance measured, and they have the power to keep things this way.


Not sure what your relevant experience is, but in my experience large healthcare institutions are constantly measuring their performance.

The problem is comparing it in a useful way across different institutions.


I have zero experience from the industry, but I still want to defend my point:

"Comparing it in a useful way across different institutions" seems like a BIG deal!

Institutions revealed to be inferior would have to improve or close down. Those proven to do great work would be richly rewarded, but at all levels there'd be a new level of scrutiny.

This would be of huge benefit for patients, save tons of lives etc, but also put a lot of strain on doctors and hospitals, and I can see how people with power in those fields don't want this to happen and thus don't make it happen.

It's also entirely possible I babble too much about things I know nothing about here. If so, I'd appreciate learning about it!


Exactly this. My whole job is based on the torrent of data large healthcare institutions collect. The problem is not that people aren't writing things down - it's what to make of it.


The article's author Atul Gawande more recently published a study showing that you are three times more likely to die if you are treated in some hospitals compared to others [1].

Both the article and study suggest what I believe is a major failing in modern medicine: We should be measuring risk-adjusted outcomes for hospital treatments, then publishing a rating for each hospital on a bell curve. With such an approach, patients will naturally gravitate towards the better hospitals and the poorer hospitals will have an incentive to improve their procedures.
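
As an aside, risk adjustment is typically done along these lines (a minimal sketch on made-up data, not the method used in the study): model each patient's expected risk of death from their characteristics, then compare each hospital's observed deaths with what its case mix predicts.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000
    # synthetic patient-level data: age, severity score, treating hospital
    age = rng.normal(65, 10, n)
    severity = rng.normal(0, 1, n)
    hospital = rng.integers(0, 20, n)
    p_death = 1 / (1 + np.exp(-(-4 + 0.03 * age + 0.8 * severity)))
    died = rng.random(n) < p_death

    # expected mortality from case mix alone (no hospital term)
    X = np.column_stack([age, severity])
    expected = LogisticRegression().fit(X, died).predict_proba(X)[:, 1]

    # observed/expected ratio per hospital: > 1 means worse than its case mix predicts
    for h in range(20):
        mask = hospital == h
        print(f"hospital {h:2d}: O/E = {died[mask].sum() / expected[mask].sum():.2f}")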

Here's something to consider: According to the article, "In 1964... the median estimated age at death for patients in Matthews’s center was twenty-one years, seven times the age of patients treated elsewhere...After Warwick’s report came out, Matthews’s treatment quickly became the standard in this country." If Matthews's treatment had been less spectacular - say only twice the life expectancy of patients treated elsewhere - how much longer would it have taken for his treatment to become the standard? Perhaps never?

[1] https://www.nytimes.com/2016/12/14/business/hospitals-death-...


I think the interesting thing to consider is what happens to the hospitals that do terribly, but not so poorly that they should be shut down. The publication of their rating will start causing doctors and patients to avoid them. In our fee-for-service model, this causes the hospital to starve and enter a vicious feedback loop.

Perhaps we should have an independent commission that has the ability to perform an outside investigation into these cases and attempt to understand what's going wrong. For hospitals that are under-resourced, they would have the power to increase allocations. On the flip side, they could shut down the ones that are irreparably broken and can safely be removed from a region.

Also, we should do away with fee-for-service for good. It is the worst bag of incentives this side of the line between healing and hurting.


> In our fee-for-service model, this causes the hospital to starve and enter a vicious feedback loop

Sounds like an alignment of interests between hospital management and patients.


I don't understand how this helps hospital management once it enters the feedback loop. It does incentivize them to not get there in the first place, but I bet when you analyze the tradeoffs they make to forestall it you won't be happy either.

This is especially a problem when you have a lone hospital serving a poor area. The patients don't have money; maybe they don't have insurance. If the hospital starts cranking out bad results, it'll just deteriorate.

The market model only kinda works in theory for wealthy populations with multiple hospitals nearby.


There's a market for cheaper, less safe cars. There would be a market for cheaper, less safe hospitals - in fact there already is, we're just embarrassed to talk about it openly.


Think carefully about what you're saying. The rich get to live longer and healthier and the poor get substandard healthcare? The poor are the ones that need to work.


What's the alternative to fee-for-service? At the end of the day, even if you abstract out these costs to bundled payments or the other things that we saw in the ACA, there's still a price attached to every service provided. Wouldn't that be the only way for the government to know how much to budget for healthcare costs?


The alternative is medicare for all bundled with something that looks like per capita allocations to take care of a patient population. Patients pay nothing at the point of service. Hospitals must drive down costs to make a profit. If they start cutting corners, they lose their bonuses. If they really aren't making enough money to do their job and can prove it to an independent board, their allocations can be increased. You can imagine some kind of bonus system based on YOY improvement or maintenance of patient outcomes.


That doesn't work for a single hospital in isolation since hospitals mostly only treat patients that present with serious problems. If you're going to use a capitation payment model then it has to cover a complete integrated delivery network including acute care, ambulatory care, laboratories, imaging, physical therapy, etc.


That's what I was thinking of but didn't articulate correctly. Can you elaborate? What interactions fail if the model isn't comprehensive?


There's just no practical way to do it. If a payer wanted to do capitation payments to a single isolated acute care hospital how would they even figure out what to pay and what incentive would the hospital have to take that deal? In order for something like an accountable care organisation (ACO) to work they have to control all aspects of patient care so that they have a chance to address problems early, long before the patient is admitted to the hospital.


I think this is similar to the idea of Health Justice that I've heard Tim Faust talk about. The reason the hospital will take the deal is simple, it's the only deal by law and the private insurance payers will be eliminated.


We don't live in a dictatorship and there's no way politically to force them to take that deal. Major changes won't be made to the healthcare system without at least rough consensus from the wealthiest and most powerful stakeholders, including major hospital chains. Even under single payer systems, hospitals are funded at least partially based on the actual amount of care delivered.


As you can see with the tax plan being rammed through, the minority of the wealthiest can ram through whatever they damn well please. We can do the same to them. We outnumber them.

And yes, we can rationally allocate resources to ensure care is adequately delivered.


Well, there could be other factors too. When my dad was diagnosed with cancer, there was only one hospital in the country that had any experience in treating that specific type of cancer, so he had to go there, even though it was many hours of travel away from where he lived. If he couldn't do that, due to lack of money or resources to travel that far, he would have had to use a more local oncology department, but I can only guess that his survival chances would have been a lot lower, not because of any malicious factors, but because the doctors there had no experience treating that type of cancer and didn't have access to the latest treatments and funding for said treatments.


From the article, it seems that there are two opposing approaches to "evidence-based medicine":

(a) do research, publish papers, thoroughly test new treatments in well-defined trials, and slowly build up a biological theory as well as official guidelines and treatments; and

(b) measure the outcomes of different hospitals, declare the best performer's methods the state of the art, and expand those methods elsewhere. If the measurement is sound, one can argue that both are evidence-based. The former is slow but may provide a deeper understanding of the inner workings of the disease. The latter is fast, but it's hard to tell exactly why it works so well.

There was a similar shift in machine learning research in recent years:

(a) write theoretical papers and study mathematically the generalization performance of algorithms.

(b) release a new challenging dataset every year (withholding a test set) and organize a prediction competition on it. The winning algorithm is declared the state of the art and can be applied to other datasets, even though Boone understands why it works so well.

Approach (b) was particularly fruitful and efficient in recent years. Let's hope that applying this approach to medicine will lead to great outcomes!


You can't rightly say in either medicine or machine learning that the latter method is "better" than the former. You need the former to make progress past whatever local maximum your buckshot approach gets you. If you don't understand how it works, then eventually all the refinements lead to overfitting in machine learning and superstition in medicine.


Concerning your last paragraph, this article is from 2004, so technically we can examine the efficacy of this approach right now if it were utilized.


Intel also used (b), in "Copy Exactly!", including seemingly irrelevant details. They also tried to work out why it worked... but didn't let not knowing stop them from using it. In science, experimental results often precede theory anyway (and indeed incite it).

Reading the article, it struck me that simple geographical factors might also influence success rates: e.g. weather (humidity being a CF treatment), genetic factors in that population, socio-economic factors (esp. time and effort available for the intensive care required).

BTW I like to think that's not a typo for Noone, but a US colloquialism paralleling "God knows why" but instead referencing Daniel Boone.


The obvious difference here is that no one dies if I start training my new weird deep learning architecture on some challenging training set on an AWS gpu instance and said architecture turns out to be crap.

Meanwhile, medicine is a lot less forgiving of wild goose chases, and requiring that the taking of risks is soundly justified seems pretty reasonable, as exciting as the idea of unrelenting research sounds.


"One day when I was a junior medical student, a very important Boston surgeon visited the school and delivered a great treatise on a large number of patients who had undergone successful operations for vascular reconstruction. At the end of the lecture, a young student at the back of the room timidly asked, “Do you have any controls?” Well, the great surgeon drew himself up to his full height, hit the desk, and said, “Do you mean did I not operate on half of the patients?” The hall grew very quiet then. The voice at the back of the room very hesitantly replied, “Yes, that’s what I had in mind.” Then the visitor’s fist really came down as he thundered, “Of course not. That would have doomed half of them to their death.” God, it was quiet then, and one could scarcely hear the small voice ask, “Which half?”

—Dr. E. E. Peacock, Jr., University of Arizona College of Medicine;


I've seen this come up in EMS. There is the question, for certain types of patients, of whether it is better for the EMTs to provide care on scene or just get the patient to the hospital soonest. To help determine what's better, they pick days of the week where EMTs provide minimal care. It's rough to do this to patients and EMTs, but it's also important to know what truly affects patients' outcomes.


Except that people are already dying. It is not the case that nobody dies now and only this newfangled method has some chance of people dying. People are dying now, and the faster we can make fewer people die, the better. Even the notoriously slow and bureaucratic US drug approval process recognizes that sometimes new drugs are discovered that are so much better than the status quo that it is unethical to check if there are unknown side effects; it doesn't really matter if someone gets kidney disease ten years down the road if they have advanced cancer right now.

edit: of course I agree that the new way is not necessarily better, I'm just saying that the old way is also not necessarily better.


It isn't true that the FDA recognizes that some drugs are so beneficial it is unethical to check for side effects.

The FDA approval process is slow partly because it is bureaucratic, but largely for two very important reasons: 1) because it is easier to prove effectiveness than safety (i.e. easier to prove a positive than a negative), the FDA requires rigorous standards to ensure drugs are safe (many FDA rules came into existence because people died from taking approved drugs), and 2) to prevent snake oil sales.

It is difficult to imagine how pervasive and socially detrimental "snake oil salesmen" are (i.e. people who sell useless products to vulnerable sick people by taking advantage of their need for care), but without regulations this would be a huge problem.


The FDA does have a fast track process, but it's more of a "we'll review your submission in two months instead of a year" kind of deal. You still have to do all the clinical trials, although in a few cases (e.g., the ebola vaccine during the recent epidemic), it is possible to overlap the trials a bit.

According to the people who actually work on these drugs, the FDA is neither excessively nor insufficiently bureaucratic. It's worth remembering that only about 10% of the drugs that start the process actually finish it, and half of the submissions turn out to fail at the final, expensive step because they don't work.


Yes, fast track, breakthrough designation, and priority review allow you to cut review time down and get more FDA input on your development plan, but none of them allow you to ignore getting safety data, as the person I responded to said.

Accelerated approval actually does reduce the number of studies you need to do to get approval. One of the reasons cancer is such a hot space is that the FDA has been letting many drugs get accelerated approval from phase 2 studies looking at tumor response endpoints or progression-free survival, rather than overall survival, which is a more relevant endpoint but one that takes longer to measure.

I work in the field and find the FDA is generally OK, but I've been in situations where the FDA used its power to enforce its will based on internal politics rather than scientific evidence. Very discouraging, but that isn't the norm.


Meh I don't think your example about machine learning is particularly apropos...or even very accurate.


Brought to mind http://slatestarcodex.com/2017/08/29/my-irb-nightmare/ . Maybe our standards for a)-style evidence are too high, and it needs to be easier to do evidence-based work within clinical practice?


Before you have a procedure performed, ask your doctor a simple question: "How many times have you performed this procedure?" What I have noticed is that doctors absolutely hate this question - I have witnessed doctors become visibly angry. I have never once had a doctor answer it. I have heard answers ranging from "We don't keep those records" to "We don't keep totals". Are these records not kept? Why wouldn't this be public information? There must be practitioners who have completed a given procedure zero times - are you patient number one? How can doctors improve if they don't keep transparent counts of procedures performed, procedures failed, and procedures which were successful? "How can you define a failed procedure?" - I can hear some say. But that is a discussion these professionals need to have to find incompetence hiding among their ranks.


The problem with this is that, except for some really routine stuff, a lot of operations are really custom-tailored to the patient. This might be the 100th time the surgeon is removing a certain type of tumor, but the first time doing it in an obese person, or a child, or someone with a rare blood condition, or a million other things. Once you get to the real life-saving stuff, no two operations are the same, even though on paper they might seem like they are. That's why it's hard to put a number on it.


>This might be the 100th time the surgeon is removing a certain type of tumor

I think this is all OP was asking about. They want to know if you've ever removed a tooth before, not if you've ever removed the right molar from a 27 year old very tall man who has a small jaw and bad breath.


Sure, but I'd rather have a stomach tumor removed by the surgeon who'd performed twenty custom-tailored stomach tumor removals in the last year than the one who'd done one.

My ophthalmologist used to do LASIK as part of her regular practice, before deciding it was better to refer patients to somebody who was doing the same type of operation over and over - probably getting five times as much practice or more.


Why would it matter?

Indeed, what counts as "failed", and how is the procedure responsible for it rather than adherence to care (i.e. orthopedic or even cardiac procedures that should be accompanied by weight loss, exercise, or whatever; who's at fault if the patient does not follow his diet? Is it a failed procedure because of the practitioner or because of the patient?), or even simple patient physiology?

This question also makes me super angry because it really makes no sense whatsoever, and just comes from unreasonably suspicious patients who should probably just give up on using modern medicine if they can't trust a whole TEAM of doctors and caregivers.

Procedures themselves are not even the riskiest thing compared to anesthesia or even post-op care (hello, nosocomial infections). Also, most procedures are not rocket science either; in my experience surgery is more like a makeshift job than anything.


You also want to check what the current state of the art is and if you really need your procedure right now.

Personal example: a decade ago, using ultrasound to remove thyroid nodules was at the clinical trial stage with good results. But surgeons were still in the mindset of "remove everything even if you could keep half your thyroid, and enjoy your levothyroxine". Now ultrasound is a go: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5434558/ so if you bet on progress you won.


You should also ask "how often, recently?".

I did ask my surgeon and he answered. But he is pretty exceptional in a number of ways.


This article spent paragraphs describing a normal distribution where a couple of descriptive statistics would do (mean and standard deviation). Ok, I kid, that would be too technical. The old adage still stands though: a picture is worth a thousand words. Instead of showing us what this curve looks like, they wasted the one graphic of their column inches on a useless stethoscope Gordian knot illustration.

On the plus side, this article appears to be famous and from 2004. Somebody must have produced a graphic of this bell curve of CF outcomes by hospital. Anybody got a link?


...we know what the graph would look like. The author knows what the graph would look like, the editor knows what the graph would look like.

The article isn't about what the curve proves; it's about what the curve means.

(it also says in an early paragraph that you will see different distributions for different diseases/operations; there isn't just one curve)


I know what a normal distribution looks like, but I have seen too many flaws in data, or flaws in the characterization of data, revealed by graphs of that data. If this data and its graph are so earth-shaking, show it.


Worse than that: if all hospitals were functioning perfectly, we would expect to see exactly a bell curve due to small random variations adding up.

A bell curve is not evidence that anything is wrong, and in fact it could be taken as evidence that everything is fine: a bell curve suggests that no hospital is doing remarkably badly.

So this part of the article makes it into the headline despite telling us nothing, and reduces the credibility of the whole thing.
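
A quick toy simulation makes this concrete (made-up numbers): give every hospital the same true mortality rate and the observed rates still fan out into a bell shape from sampling noise alone.

    import numpy as np

    rng = np.random.default_rng(1)
    true_rate = 0.04           # identical true mortality rate at every hospital
    patients_per_year = 500
    n_hospitals = 200

    # observed mortality is just binomial noise around the same true rate
    deaths = rng.binomial(patients_per_year, true_rate, n_hospitals)
    observed = deaths / patients_per_year

    print(f"mean {observed.mean():.3f}, sd {observed.std():.3f}")
    print(f"best {observed.min():.3f}, worst {observed.max():.3f}")
    # the histogram of observed rates looks roughly normal even though every
    # hospital is identical, so the shape alone says nothing about quality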


I actually wonder if it really is a normal distribution.

A lot of people think any distribution with spread is a "bell curve". In a lot of fields like this the tails are a lot fatter than they should be in a normal distribution.

And doctors (author is a surgeon) don't get much statistical training. I was once told by a doctor that all random distributions are normal curves.


I also wanted more visuals or highlighted quotes. I opened the article, scrolled through it, saw there was no high-level answer to the titular question and closed it. I'm curious, but don't have time to read the article. ¯\_(ツ)_/¯


>> What makes the situation especially puzzling is that our system for CF care is far more sophisticated than that for most diseases. The hundred and seventeen CF centers across the country are all ultra-specialized, undergo a rigorous certification process, and have lots of experience in caring for people with CF. They all follow the same detailed guidelines for CF treatment. They all participate in research trials to figure out new and better treatments. You would think, therefore, that their results would be much the same. Yet the differences are enormous. Patients have not known this. So what happens when they find out?

Maybe this is simply correct and the variance in care is small, but regional resources, environmental factors, and other co-factors are distorting the patients' results. If income and insurance are lower, patients might have to take on more stressful jobs. If education is lower, parents might not be able to care as effectively because they don't understand the physician's instructions. Other environmental co-factors in impoverished regions might worsen certain diseases in some places but not in others, and so forth.

This is a very important point because it might heavily distort how good the care actually is, there was surprisingly little methodology in the article.


This one is from 2005.

But IIRC it launched Gawande's career, and I think it's a great read.

I saw him talk a few years after this. Very inspirational!


This is interesting from another angle. I'm not sure if it has been mentioned in the comments but I will go ahead and present it anyways.

I was diagnosed with anorexia when I was 14. The physical, medical care that I received at my local children's hospital was alright, and I have no comparison of course. I was refed, kept in bed, and slowly gained weight to a healthy level. However, the psychological treatment was abhorrent. I was sent to an outpatient eating disorder program, which treated both children and adults, at the same hospital. I came from that place at a healthy weight, but with the same ideas about food that I'd always had up to that point.

I no longer have an overt eating disorder, but of course they say with these things anorexics are always in recovery. However, it took me four years to learn healthy eating habits on my own. I did not have external help for this at all. It took me four years to teach myself how to eat. I still look back on the eating disorder program with distaste and distrust for the medical system.

My point is, it was very obvious to me that my psychological care was not up-to-par, but I still don't know how my medical treatment was-- that is, fixing the parts of my physical body that I'd damaged with malnutrition. Most people think of psychologists, psychiatrists, and therapists in terms of their skill and efficacy. But medical doctors are mostly assumed to be equally proficient. I am not sure what can be done about this particular assumption, but it is very real and can be life-threatening.


When it comes to chronic cases, I would think that the outcomes are more measurable - for example, treatment of diabetes or Lyme disease. Word of mouth (both from patients and other doctors), the internet, etc. will work relatively well, depending on one's ability to separate the wheat from the chaff.

Any physicians reading this - feel free to contact me; I'm looking for physicians for the long term for my own chronic, but relatively minor, health problems. I find the average physician somewhat mechanical in how they approach a health condition and sometimes (1) dismissive of my concerns/views. Since I'm somewhat inclined to dig into the details of a health condition, I read up on things on the internet, and that means I may be more likely to question a doctor's recommendations.

(1) I fully understand why they would normally do this. One certainly cannot entertain a patient who forms an opinion based on the first few articles they read on the web, which is what I presume most people do.


That's a pretty interesting article.

While the measurement of performance is critical to improving it, we need to be careful about how we incentivize that performance. The more economically important performance on a metric becomes, the less useful the metric is in actually measuring performance, and the less real improvement can be gained from its measurement.


aka "When a measure becomes a target, it ceases to be a good measure." - Goodhart's Law [0]

[0] https://en.wikipedia.org/wiki/Goodhart%27s_law


I like this, a lot, but I feel like I'm going to start incorrectly applying this in too many areas. It's dead true, but also a bit of a catch-all for so many IT scenarios.


Only when the metric we're measuring and the dimension we're looking for improvement on differ. This concern doesn't seem to apply to measuring lung capacity.


If you tie the bonuses of the people measuring the lung capacity to that lung capacity, then you will make that metric less accurate.


It isn't necessary to be the best. What is needed is that patient outcomes get better. Those who are at the wrong end of the curve need to acknowledge that they need to do better and those in charge of the system need to assist them to improve.

Making payment conditional on results is not likely to have the desired effect, especially if the decision to pay or not is not made by the person actually affected. If anything it will simply drive medics away from difficult areas of expertise.

Take the above with the usual scepticism, I'm a software developer after all, not a medic. But it seems to me that you always need to have carrots as well as sticks, or perhaps just carrots.


If health care was free, a problem arises. Everyone will think they're entitled to the best doctor. Who gets the best doctor?

Under a market system, the best doctor goes to the patient willing to pay the most.


A bad doctor is (largely) better than no doctor, and that would be the difference for a lot of people. Being represented by a public defender isn't the best, generally, but it's a hell of a lot better than not being represented at all; moving to a purely paid legal system would not be an improvement.


The doctor chooses. In Canada, doctors can accept whoever they want and they'll only accept patients with good 'fit'. Everybody else is stuck going to a walk in clinic or the ER. How does it work in US? Do Doctors charge different rates for a checkup? I thought it was fairly opaque between the doctor and the insurance company?


> The doctor chooses.

Then there'll be a non-monetary "currency" of favors, who you know, black market, strings, etc. Such systems inevitably appear in any system of "free" scarce goods.


I'm a Canadian physician, and this is not a correct statement. Nowhere in Canada are you allowed to cherry-pick patients.


"Canadians pick their own doctors, just like Americans do. And not only that: since it all pays the same, poor Canadians have exactly the same access to the country’s top specialists that rich ones do."

https://ourfuture.org/20080204/mythbusting-canadian-health-c...

When the good doctors are in demand, and the not-so-good ones are not, how is the allocation done?


Funny thing about markets: they work even if you don't "believe" in them. So the more you try to tightly control a market, the more alternatives show up.

In this case the alternatives being the usual power networks of the socialist states: friends and relatives in government and administration. Who you know becomes more important than what you know. Who you do becomes more important than what you do. A small favor done once gets returned. Informal networks of favors, relations and power turn rapidly into an impenetrable mafia.

Imagine your DMV visits applied to life-and-death situations. And it's not only doctors. It's also access to latest medicine, newest medical devices, best hospital beds and sections and especially doctor attention.

Source: I lived in these god-forsaken systems, unlike most of the down voters here...


>Imagine your DMV visits applied to life-and-death situations.

Kinda funny, but my DMV visits have been extremely smooth. If you get there with everything they ask for (you did read the FAQ, didn't you?), there's no reason why you can't have a similar experience.

Now my experiences trying to get a doctor's appointment, OTOH...


My DMV experience was long lines and waiting around for an employee to manually grade multiple choice tests. That was in CA. On the other hand things were a lot better, way more efficiently run in PA. Why can't California give me a similar experience as Pennsylvania?


> Imagine your DMV visits applied to life-and-death situations.

I don't have a lot of trouble at the DMV; maybe we could use an example where I've consistently had terrible experiences, like UPS?


Where have you lived?


>If health care was free, a problem arises. Everyone will think they're entitled to the best doctor. Who gets the best doctor?

Oversimplifying here, but how about setting a baseline for free health care? And then if someone wants to get the "best" doctor, they can pay extra.


Same reason why not having net neutrality is a problem. Eventually health care gets terrible for the people who aren't paying, or are paying less, and you've institutionalized a two or more tier public service based on wealth.

What if somebody could pay extra to have the best firemen, or the best police?

WalterBright has posed a real problem here, but I honestly think that the solution is just that patients pick their doctors, and if they can't get the doctor they want, they can get on a waiting list for that doctor while seeing another. Additionally, that waiting list would not involve any administrator latitude or patronage, just list seniority.


On the other hand, most health care is routine, and one doctor is as good as another. The best doctors are needed for the hard cases. A strict waiting list does not allocate doctors to where they would be most effective.

It's similar to the inherent misallocations of rationing - goods & services are not allocated to where they are most needed.


Because then the people who only get the baseline become a political force for how unfair that is. (We see this in the newspaper essentially every day regarding the public school system.)


Yet the doctors who charge outrageous sums may not actually be as good as the doctors everyone has access to at the county hospital. Indeed, teaching hospitals are known to have better outcomes, and serve the underserved.


You have to look at reality and not just some simplified model. Does this happen in countries with universal health care? I don't think so.


> I don't think so.

I asked that question of a Canadian doctor further in this thread.


As far as I know, Canadians don't all rush to some superstar doctors. This is not happening in Germany either. It's the same simplistic thinking as when people say that if health care is free they will consume more. That is simply not happening in countries where health care is free.


When a scarce resource is free in terms of dollars, other factors come into play that act as costs. Common methods are rationing, waiting times, social status, favors, donations, etc. Something has to happen to match up supply with demand.


People who can go to a doctor freely go sooner and don't (have to) wait until things get really bad - at which point treatment also gets far more complicated, lengthy and expensive, and that relationship is not linear. I don't need to wait until the tooth is truly rotten and the jaw and bones holding it are impacted, because I have no incentive to delay and save the expense, which is even worse for the many uninsured and under-insured.

I grew up in the GDR (East Germany) for the first 17 years of my life. While I turned out well, I had years and years of close contact with a large variety of doctors, for my eyes, sleep EEG, LOTS of small problems. I was a regular visitor of a pretty sizable variety of doctors in various places, not just the hometown.

The fears expressed here by Americans about that kind of health care are, to me, simply ridiculous. The system worked darn well. I got soooo much, and I wasn't even all that sick, and that in a really poor country (compared to West Germany).

By the way, we did have a "top layer" of doctors (in East Germany) where you needed more (don't know if it was money or favors or if they selected you), who were far better than the normal ones. Still, nothing any more outrageous than now - the best are few, everywhere. Getting to them wasn't a matter of being rich though (I know because I got to see one or two of such top doctors).

Interestingly, a friend of mine found that getting the "top doctors" even now does not have to be a question of money. His wife got cancer and the standard treatment would have given her 6 months. He had money and brains; he checked the literature and did his research to find the best doctors relevant to his wife's case - and all it took to get them to treat his wife was a few phone calls. Not a single additional Euro was spent; they didn't want any. They were cancer researchers at German university clinics, but he also checked with experts in the US. The experimental treatments she got, which gave her a few more years (instead of 6 months), were all paid for by the "Krankenkasse" (the common public insurance, as opposed to private insurance, which exists too). They paid the huge sums without question but refused to pay tiny amounts for small stuff...


For each medical condition, there exists a wide range of treatment options, all varying in efficacy, convenience, pain, risk, and cost. If cost were no consideration, why should anyone not get the most costly treatment available, even if it is only a minor improvement?


Most people don't behave that way. You seem to be assuming that people always want to squeeze everything they can out of a system but that's simply not the case in real life.


> Most people don't behave that way.

They don't? How many patients in a "free" health care system ever ask what the cost is for any of the options?

If coach / first class were offered to you, free of charge, which would you pick?

Is it a coincidence that electric cars are suddenly taking off in the UK when their TCO (Total Cost of Ownership) dropped below that of gas cars?

Building a mass market system that relies on the bulk of people behaving altruistically is not likely to work.


Maybe people go to a doctor only when they need it? I don't want more medical treatments when they are free.


There are a lot of people who will only go when they need it, mostly men. For instance I have not been to a GP in probably 6 or 7 years. But I can see lots of people with chronic conditions (most of which could be prevented or reversed with lifestyle changes) going all the time. And it seems like the kind of thing where once you have a couple prescriptions, you're going in frequently to check in, and each time possibly adding new ones for different ailments you develop.


My doctor-couple friends said it this way: there are three ranks of hospital. Children's hospitals are the best. Next are women's hospitals (maternity etc.). Finally, hospitals catering mostly to men are the worst, mostly because men are so indifferent to treatment.


Canadians who can afford it go to other countries to get prompt care.


:s/willing/able


> Under a market system, [a problem arises.] The best doctor goes to the patient [capable of paying] the most.


Exactly. Add that to the long list of reasons for working hard to earn more money. It's not just about the bigger house and the faster car -- it's also to get treated by the best doctor.

The alternative mechanism amounts to "the best doctor goes to the person who has the most connections and somehow subjugated the most people." That's why elected officials in Congress get to the front of the line.


> That's why elected officials in Congress get to the front of the line

There are more doctors than members of congress, so it's a global improvement on the status quo, unless doctors not treating Congress critters just sit around twiddling their thumbs.


It's only an improvement on the status quo if you assume that the status quo is that there are fewer doctors for some reason, or that Congresspeople create an excess of doctors beyond what's needed to treat Congresspeople?

What is the "status quo" in this comment?


> entitled to the best doctor

The problem is "the best doctor" is essentially a lie. Modern medicine is mostly a huge fraud. Non of this hi tech crap even works when measured on the basis of lifespan.

Disease is overwhelmingly rooted in nutrition and lifestyle. This idea that doctors and their treatments matter is a big lie.

The USA blows a fifth of GDP on health care and it is obviously completely useless. Americans in practice live no longer than Mexicans or Cubans, who spend a tiny sum in comparison.

The most immediate political problem we face is starving the American medical racket out of existence. These crooks are bankrupting the country. The hospitals and doctors and "researchers" and pharma companies must be cut down to size. Starve them of a cent of federal revenue. Or prosecute them for their pervasive anti-trust violations. I don't care. Crushing the medical complex by any means available is literally the most important problem facing America. These crooks are literally stealing over a thousand dollars from every American household every month. It's unbelievable.


Reminds me of Vince DeVita's approach to cancer: never give up, and don't care much about standard practices if they fail you.

Rant: the medical world is large and stretched; it's impossible to discuss anything with a doctor (you'll get more answers than doctors asked, and they won't tell you everything). But whatever they say is gospel (or even more), so if they decide to drop the ball, you will too. And if a sibling goes against the doctors' advice, you'll only make the situation more tense.



While Warwick and Matthews are both excellent physicians, what if their use of non-traditional, non-peer-reviewed methods ended up shortening someone's life?

How does a system as large and bureaucratic as health care allow for individual discretion and experimentation at the individual physician level without harming patients?


As a patient, I feel that there is a need for doctors themselves to acknowledge that their performance needs to be measured, and to give us patients a way to make informed decisions. That would be step no. 1; once we have established this need, we can look at how it can be done.


An interesting corollary to this article is this:

http://slatestarcodex.com/2017/08/29/my-irb-nightmare/


Everything in this story is so relevant to my current trials and tribulations with medical IRB approval. I read this monthly just because I need to relate with someone else's experiences.


The Don Berwick December 1999 speech mentioned in the article is here: https://www.youtube.com/watch?v=00aa6xcOXf4


> It belies the promise that we make to patients who become seriously ill: that they can count on the medical system to give them their very best chance at life.

Medicine makes no such promise. Doctors are going to handle any given condition with the currently accepted "reasonable and prudent" treatment. This is going to be appropriate for the vast majority of patients but it isn't necessarily everyone's "very best chance."


I predict that with rigorous study we will find that fully half of all hospitals will be below average.


Thank you for the repost, I read this article many years ago and was never able to find it again.


So why is this 13-year-old article on Hacker News? Or rather, how? Is there a bot among us?


It's an excellent article that illustrates the difference between 'good' and 'excellent' care and the attention to detail it requires to obtain 'excellent' results. Dr. Warwick's line of questioning that revealed his patient Janelle's reasons for not doing her treatments was illuminating and not every doctor will dig for answers that allow their treatment plan to remain 99.95% effective.

After finishing the article, I decided to take inventory of myself and see where I can improve from 99.5 to 99.95 even though my job doesn't deal with life and death patient outcomes.

Sometimes articles that aren't about computer touching can make you think about computer touching in a different way, it's good to read a variety of things.


Not a bad article, but it is from 2004. Might be time for a follow-up.


Needs [2004] in the title


Unfortunately, the title already was maxing out the character limit when I submitted.


We'll add it.


I've been thinking about some of the same issues but with dentists--all the sites to locate dentists are utter garbage and there are no reliable ratings for dentists. Worse, dentists toe (and I argue, cross) the ethical line when it comes to patient care, performing often-unnecessary fillings in order to make money. The whole dental industry is a cesspool of shady vendors and salesmen pushing dentists to upsell "patients" and charge bigger bills to insurance.


The depth of this article allows thinking beyond the medical domain. Warren Warwick's success came from 'a capacity to learn and adapt—and to do so faster than everyone else.' To do so, there needs to be a fierce consistency of approach to enable easy measurement, yet at the edge a wild experimentalism to ensure improvement.

Fascinating. Continuous improvement in action.



