I spent the past four years working as a data scientist for a healthcare company on population health initiatives, and started building out a body of research around how to engage clinicians using data (among other things, through dashboards). That's a bit different than the article, but one of my key learnings was that dashboards are often incredibly ineffective and only promulgated by well-intentioned engineers, based on what they would want to see if they were a clinician.

I worked with a behavioral economist, and we started running RCTs on different approaches to sharing data. We found that dashboards led to less engagement; when there was engagement, it was more likely to drive ineffective interventions; and our dashboard groups generally had worse patient outcomes. Non-dashboard approaches had 20x better engagement and anywhere from 20-100% better patient outcomes (depending on the condition).

Unfortunately, both of us left the company when a bunch of engineers moved in, scoffed at our work, and immediately said "doctors need to be able to slice and dice their data" -- which, by every measure we had tested, is simply not true. But the "mission control" style of thinking, where you have tons of dials and numbers flashing on a screen, prevailed because it "feels" like the right answer despite being the objectively wrong one.




Are you me? I also built out a body of research in a healthcare company about how to engage clinicians using data, that stopped us from making some mistakes until we got acquired. No doctor (or chief of staff, or chief medical officer, or anyone besides an actual data analyst) will slice and dice their data. Doctors won't even listen to a non-peer clinician discuss data with them, let alone their administrators.

In my timeline, I told the engineering team that all of their work was almost certainly for naught, that all the research said this product would completely fail, and we were basically just doing it for managerial and contractual obligations.

This gave engineering the freedom to use whatever technologies and conduct whatever technical experiments they wanted, since no-one would ever use the product, and it'd likely be shut down soon after launch for disuse.

A key hospital partner gave us a couple dozen docs to test it with. I interviewed them about how they measured their work and impact, and the data they used to improve their craft and outcomes. I asked them to review every measure on the dashboard, explain their understanding of it, and explain how their work or behavior would change based on that.

Almost to a person, the doctors said there was nothing of use to them there, as the research predicted. Some of these doctors were on the committee that specified the measures they would themselves be seeing.

The product was launched with great managerial acclaim, and promptly sunset 12 months later from disuse.


I dunno, but it sounds like we should get a beer sometime.

Not sure if this resonates also, but the engineers who took over all came from outside healthcare and had a strong "I'm going to apply what I know from Ticketmaster to solve healthcare!" mentality. Those of us with 15 years of experience in healthcare would, at best, get a 10-minute "knowledge sharing" meeting with the engineering and product managers. And then we'd sit back and watch them make some really naive mistakes. [To be clear, I'm not about gatekeeping people from health tech; I'm just exhausted by interacting with people who have no self-awareness about how much they don't know about a particular domain.]

I'm still a bit bummed because I think we were actually just starting to get to some really cool, actually innovative, population health approaches that seemed effective for both improving outcomes and minimizing provider burnout. :(


Curious if you have any examples of "Non-dashboard approaches" to compare and contrast?


We tried a couple of different approaches: Tableau reports, emailing static data to providers, sending spreadsheets of patient-level data, and building a Facebook- or Twitter-style feed. We then had different variations on each and ran trials comparing the approaches.

We pretty quickly found that sending data ("push") was way more effective at engagement than just having a Tableau report they could go to ("pull"), even when that dashboard was linked directly within the EHR, didn't require a login, and was contextualized for the provider (basically as low friction as you could get -- they would still only venture into it 1-2 times per year).

We ran a trial where we changed how we presented data: either in terms of number of patients ("screen 20 people for depression this week") or in terms of quality rates ("your depression screening rate is 40% and going up"). Keeping the data in terms of patients led to ~20% improved screening, and in the surveys led to providers expressing more trust in the data (although they were also more likely to say they "didn't have enough data to do [their] job", despite actually doing their job better than the other group).
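
To make the two framings concrete, here's a rough sketch of the two message templates (Python; the function and field names are mine for illustration, not our actual code):

    # Minimal sketch of the two framings of the same screening data.
    def count_message(eligible_unscreened, period="this week"):
        # "Patient count" framing: a concrete task
        return f"Screen {eligible_unscreened} people for depression {period}."

    def rate_message(screened, eligible, prior_rate):
        # "Quality rate" framing: a percentage plus a trend
        rate = screened / eligible
        trend = "going up" if rate > prior_rate else "not improving"
        return f"Your depression screening rate is {rate:.0%} and {trend}."

    print(count_message(20))                                         # count arm
    print(rate_message(screened=40, eligible=100, prior_rate=0.35))  # rate arm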

So then we took that idea and extended it from depression screening to chronic disease management (where the specific task for the provider is much more variable). We had one arm where we gave providers access to data marts and trained them on how to "slice and dice" the data, and compared that against a newsfeed that had the data pre-"sliced and diced". Engagement was higher in the newsfeed group. Interestingly, the only thing the "slice and dice" group seemed to do was look for patients without a primary care doc designated in the EHR and just assign them -- in evaluating the outcomes, that was the single least effective intervention they could have done to improve chronic disease care (and this was validated in a follow-up study looking explicitly at the impact of PCP assignment on patient care). So our "newsfeed" arm ended up with, on average, around 60% better outcomes than the "slice and dice" arm.

What's funny is that through all of this, some of the leaders would also say "we need more data!!" But when we'd build a Tableau report for them, they'd use it once or twice and then never again. Or, in one case, the leader would actually use it for ineffective purposes ("we have to assign them all PCPs!!") or for things that are easily automated ("we're going to email each of our uncontrolled hypertensive patients"). I firmly believe that for doctors and data, you need clearly defined objectives: the goal should never be "give them access to data", but rather something like "make providers feel like they have the data necessary to do their job" and "improve quality rates through targeted provider intervention." Start from those first principles, and validate your assumptions at each step. I'm confident your solution won't end up being Tableau.


Thanks for writing this up, it's a valuable resource that can be applied to other industries.

My takeaways:

- Start with first principles, such as "improve quality rates through targeted provider intervention"

- Push with simple stats works better than pull with fancy dashboards

- Slice and dice can help identify process exceptions but isn't great for process improvement, whereas simple stats on a regular basis improve outcomes


Thank you so much! I'm working on improving construction management with better access to data and I think these insights will transfer very well to my domain.


Do you think there's an alternate approach to showing data that would be effective? I can definitely buy the idea that showing a bunch of fancy charts doesn't help most medical professionals, but I don't like the idea of giving up on trying to surface more data. Or is that what you're referring to as "non-dashboard approaches"?


Not the OP, but in my experience, when clinicians ask for more data, they're actually asking for more lineage or provenance metadata.

We boiled it down for our teams like this:

Administrators are top-down: they want a high-level view and to be able to drill down from there.

Individual physicians are bottom-up: they want "their" data about "their" patients, and maybe, sometimes, to compare to the peers they personally trust.

As with any professional group, there's some minority percentage that treats their work like a craft and knows how to use data to improve their practices; but the majority want qualitative data and value interpersonal relationships. Giving a dashboard to the latter group at all is wasting time and effort of all parties.

If your dashboard can't attribute all of its data and all of the patients referenced to match the physician's definition of "theirs," you've lost. That's the "more data" and "drill down" physicians care about.

If your dashboard isn't timely and clinical -- which generally means presented in a clinical voice, at the point of care, or otherwise when they have an opportunity to make the change you want them to make -- it's not going to be actionable. That means surfacing the alternative action right before they see a patient who might benefit from it, which is not when they're on their computer. They might be one of those doctors who is never on their computer until the very end of the day. Looking at your dashboard at 11pm about the patients from earlier today (or, more likely, from earlier this quarter) is not helpful.

Looking at your dashboard is non-clinical work, and doctors want to do clinical work. If you're going to make them go do a new and non-clinical thing, it has to reduce some other non-clinical thing in a way that's meaningful to them. Otherwise, they're just as likely to do an end-run around your application entirely, like the doctors who only use index cards to write notes or who fail to enter their passwords every morning and lock themselves out, so they don't have to use the EMR.


I wrote a longer response to another comment with examples of some of the experiments and learnings. But, yes, I think there are effective alternatives, but I think it starts with being really clear on what your success measures are. Do you want to maximize patient outcomes? Do you want providers to feel engaged (or, perhaps, actually engage) with data? Do you want to minimize provider burnout? I was always surprised by how few clinical and tech leaders could actually articulate what the goals were-- it'd often just be "we need providers to have more data!", which I suspect isn't actually what your goal is.

The tldr was that telling providers directly what you want, generally in an emailed newsfeed-style format, was the most effective at improving actual outcomes. No slicing and dicing. No graphs. No comparisons. Just "hey, look at these 6 uncontrolled hypertensive patients, and follow up with any that need follow-up."

Also, to caveat: I'm talking about how to engage the worker-bee providers. Not clinical leadership. Not the quality team. Not the data science/analyst team. Providers who are super busy with patient care, but also expected to manage patients between visits. Basically every experiment we ran favored the most direct, no-frills, least-effort approach to looking at data. Which, coincidentally, was the exact opposite of what the engineering teams wanted to build :-/


With respect to the tldr - so, on one arm, so to speak, is an interface where clinicians could use different combinations of measures to identify patients who might need some kind of intervention, like follow-up attention. The other side is an analytic system that uses a specific set of measures, etc., and then messages clinicians with a recommended set of actions?

In the first case, the clinicians have to do analytical work (slice, dice) towards understanding the population of patients. That sounds more like epidemiology... In the latter case, how is it that clinicians will trust the recommender? Is it understood that there is a clinical rationale or authority behind the algorithm? It sounds like "uncontrolled" in this case is based on a measure that clinicians trust.

I think of dashboards as potentially good for monitoring outcomes against expectations, EDA as potentially good for focusing attention on subpopulations, and recommenders as potentially good for efficiently allocating action. In a broad way, what you described is a monitoring system that pushes recommended actions out to doers. I'd venture that, with busy clinicians, that needs to be pretty accurate, too, and/or that recommendations need both explicit justification and a link to collateral information.


Quality measures are generally well-defined by external authorities, so questions like "what defines uncontrolled" are already answered. Even when providers personally disagree (I worked with a provider who didn't believe in pre-diabetes), they still acknowledge that health care organizations are being judged on these measures and that the measures are not just arbitrarily defined. The question then turns to how you improve your quality measures.

Your comment about epidemiology/EDA/etc really hit the nail on the head. If you sit in on population health meetings at your average hospital/clinic system, you'll see that many people really don't get this. Further, people often conflate their needs/desires with those of others -- so the data-driven administrator is quick to say "we just need doctors to be able to slice and dice their data, and then we'll have better quality scores." But they're talking about their own needs, and it's completely not what the doctors actually need (and, from monitoring usage of dashboards by those types, I'd argue it's not what they need either, but that's a different issue). And the reason I keep saying "slice and dice" is because I've heard that phrase used by every vendor I've evaluated, and in practically every strategy meeting regarding population health at multiple institutions.

I'd personally shy away from describing this issue in terms of a recommender, since that has a pretty specific connotation in the ML world, and it doesn't really line up well here (e.g., there's not a well-defined objective function or a clear feedback loop to train a recommendation system on). However, getting away from that specific concept, I think it's reasonable to say that there's a need for multiple distinct but ideally-related systems in the population health world: one for analysis, used by quality and data people, and one specifically for the clinicians doing the work.


Sounds interesting, is any of that body of research available publicly?


Some, not all. I'm hesitant to post links directly on here since it would out both me and the company I worked for. If you're interested in this kind of work, you should check out the conference proceedings for AMIA (amia.org).


There are so many vanity metrics in business-related data science that I get suspicious when the visualization gets blamed instead of what is being visualized.


Can you provide concrete examples of data that would be shown in such dashboards? I'm having trouble visualizing it.


The dashboards would generally show quality measure performance, often outcome measures but sometimes process measures. So, things like "percent of eligible patients who have had cervical cancer screening" or "percent of hypertensive patients who have controlled blood pressure" (outcome measures), and perhaps "percent of patients missing source documentation" or "percent of patients with no appointment scheduled" (process measures). The initial Tableau report would let you drill down to see which patients were non-compliant for a particular quality measure, presumably allowing providers to follow up as needed.
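
If it helps to picture the mechanics, each of those measures boils down to a numerator over a denominator plus a drill-down list. A rough Python sketch, with invented patient fields (the real definitions come from the external measure specs):

    # "Percent of hypertensive patients with controlled blood pressure"
    # as numerator / denominator, plus the drill-down list. Fields invented.
    def controlled_htn_measure(patients):
        denominator = [p for p in patients if p.get("hypertensive")]
        numerator = [p for p in denominator if p.get("bp_controlled")]
        non_compliant = [p for p in denominator if not p.get("bp_controlled")]
        rate = len(numerator) / len(denominator) if denominator else None
        return rate, non_compliant  # rate for the report, list for drill-down

    rate, worklist = controlled_htn_measure([
        {"id": 1, "hypertensive": True, "bp_controlled": True},
        {"id": 2, "hypertensive": True, "bp_controlled": False},
        {"id": 3, "hypertensive": False},
    ])
    # rate == 0.5; worklist is the one non-compliant patient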

By the time I left, we had a newsfeed that would just tell them what needed to happen ("you have N patients who have uncontrolled hypertension with no follow-up appointment booked. Consider messaging them to book an appointment."). No rates, just that story in their feed, with a link to the list of patients that corresponds to the "N patients". That would all be thrown into an email and sent weekly, and we'd track when they opened, clicked, and what ended up happening with the individual patients.
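
For a sense of how little machinery that takes, composing one of those feed items is not much more than the sketch below (field names, the link format, and the tracking details are hypothetical, not the actual system):

    # Sketch of composing one weekly push item for a provider.
    def weekly_item(provider, patients):
        flagged = [p for p in patients
                   if p["uncontrolled_htn"] and not p["followup_booked"]]
        if not flagged:
            return None  # nothing to push this week
        ids = ",".join(str(p["id"]) for p in flagged)
        return {
            "to": provider["email"],
            "text": (f"You have {len(flagged)} patients with uncontrolled "
                     "hypertension and no follow-up appointment booked. "
                     "Consider messaging them to book an appointment."),
            # link resolves to the concrete patient list; opens/clicks tracked
            "link": f"https://pophealth.example/worklist?ids={ids}",
        }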


That's interesting. In my country that's usually handled by clinic administration and secretaries. I assume the presence of private medical information (hypertension status, whether it's controlled, etc.) makes it necessary for doctors to do it.

In my experience, when patients have uncontrolled chronic disease a follow up appointment is automatically set up. Doctors prescribe medication and ask the patient to return after some time so they can be reevaluated. When they miss appointments, clinic administration knows and can follow up.



