
"Medical Device Hacking" is a large part of how my wife and I slept at night for the last few years, based on the open source Nightscout [1] project which interfaced my kids' Dexcom CGMs to our smartphones. Continuous Glucose Monitors allow T1Ds to track their blood sugar without finger sticks and are particularly useful at night for catching lows.

The Nightscout project was a strong motivator for Dexcom to push harder on developing and improving their remote monitoring solution. A solution which could easily have languished in R&D and FDA approval hell for another 5-10 years got to market much more quickly once users were out there DIY'ing their own remote monitors.

Additionally, there are a lot of "mis-features" coded into T1D products, presumably foisted on companies by the FDA to avoid liability, but which ultimately are a mild form of torture for end-users day in and day out. Having the option to turn to open source alternatives when incessant alarms that can't be disabled are driving you mad is better than throwing out the device altogether. For those who haven't used, and don't know anyone who uses, a pump, pod, or CGM: the problem is that these devices are subject to dozens of variables that affect their performance, reliability, and accuracy, and their alarms are often "lacking context" (to put it mildly) or outright wrong.

E.g., when you've treated a low and double-checked with a finger stick that blood sugar is rising, a siren going off from the CGM every 5 minutes at 4am is enough to make you want to remove the CGM and smash it with a hammer.
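What we actually want is a context check before the siren re-fires: look at the last few readings and stay quiet if the number is clearly climbing back out of the low range. Purely illustrative Python with made-up thresholds, not Dexcom's or Nightscout's actual alarm logic:

    # Illustrative alarm logic only; the thresholds and window are invented for
    # this example and are not medical advice or any vendor's implementation.
    LOW_MGDL = 70    # hypothetical low-glucose threshold
    RISE_MGDL = 10   # hypothetical "clearly recovering" rise across the window

    def should_realarm(recent_sgv):
        """Re-alarm only if still low AND not clearly recovering.

        recent_sgv: the last few sensor glucose values, oldest first (mg/dL).
        """
        if not recent_sgv or recent_sgv[-1] >= LOW_MGDL:
            return False                 # back in range: stay quiet
        recovering = recent_sgv[-1] - recent_sgv[0] >= RISE_MGDL
        return not recovering            # still low and flat/falling: alarm

    print(should_realarm([58, 64, 69]))  # treated low, climbing -> False
    print(should_realarm([58, 57, 59]))  # still low and flat   -> True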

Having open source alternatives is, I believe, a large part of what forces Dexcom and even the FDA "to the table" to reconsider hard-coded, patient-hostile "features". It's easier to appease the lawyers and go into CYA mode when there isn't a strong open source competitor with 28,000 Facebook followers and a Github repo with 21,000 forks [2].

[1] - http://www.nightscout.info/

[2] - https://github.com/nightscout/cgm-remote-monitor/




"Medical Device Hacking" is a large part of how my wife and I slept at night for the last few years

I spend a lot of time in places with little to no cellular coverage. One of the medical devices I use downloads prescription changes from the doctor's office via cellular connection. So I had to hack my device so I can make the prescription changes myself.

You do what you have to do.


> ...and a Github repo with 21,000 forks [2].

[2] - https://github.com/nightscout/cgm-remote-monitor/

People need to remember that if/when a GH repo is nuked from orbit, it also nukes ALL forks done in GH.

If you download, and then re-upload into your account, then it's not a "fork" per se, and won't disappear if/when the banhammer comes down.

Then again, this is the problem with centralizing a decentralized protocol! You are putting people who don't have your interests at heart in control.


A large percentage of Dexcom developers have had their lives impacted by diabetes in some way. That, and plain old competition, are the biggest drivers of their features and usability in my experience. You have to keep in mind that, like most businesses, they have a long feature pipeline in development that is not public knowledge. If you release a feature and see it mirrored by someone else shortly after, it's as likely that you both thought of it and you just beat them to it as it is that they were inspired by you. Especially when one of you has a much shorter time to market thanks to no FDA oversight.


I’m sure the Dexcom developers are good people. And just look at their stock price — things are going well for them since they released the G6, which, well, actually works most of the time!

Their software is still doing trivial things badly and they deserve to feel ashamed and embarrassed that Nightscout could relay an integer value over the Internet to a smartphone years sooner than they could.

I’m very thankful for Dexcom. They’ve done great things for T1D management. They could do much, much better. They are only scratching the surface of what a CGM system should do, never mind a closed-loop system of which they will be an integral part and, dare I say, should be in the driver’s seat, with a dumb pump following their orders.
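To spell out what "driver's seat" means architecturally: the CGM feeds a controller, the controller computes an adjustment, and the pump just executes it. A deliberately toy Python sketch in the spirit of the DIY looping projects; every number and interface below is hypothetical, and real systems (OpenAPS, Loop, commercial hybrid closed loops) layer far more safety checks on top:

    # Skeleton of a CGM-driven loop: the sensor supplies readings, the
    # controller decides, the pump only executes. All values are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Reading:
        mgdl: int      # sensor glucose value
        trend: float   # mg/dL per 5 minutes, positive = rising

    TARGET_MGDL = 110  # hypothetical target

    def basal_adjustment(r: Reading) -> float:
        """Return a temporary basal multiplier (1.0 = scheduled basal).

        Toy proportional rule for illustration only.
        """
        projected = r.mgdl + 6 * r.trend       # crude 30-minute projection
        if projected < 80:
            return 0.0                         # suspend delivery
        error = projected - TARGET_MGDL
        return max(0.0, min(2.0, 1.0 + error / 200))

    # The pump stays "dumb": it applies whatever multiplier it is sent.
    print(basal_adjustment(Reading(mgdl=180, trend=2.0)))  # above target -> ~1.4
    print(basal_adjustment(Reading(mgdl=85, trend=-1.5)))  # heading low  -> 0.0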


> Especially when one of you has a much shorter time to market thanks to no FDA oversight.

They're fundamentally different markets, though. The device manufacturer is out to turn a profit on hardware; the open source programmer is out to improve their own individual experience.

Why should the second individual be subject to FDA oversight? I mean, I'm glad the FDA exists, but their function is to regulate the overall market -- not make it harder for me to make my _own_ healthcare decisions.


> Why should the second individual be subject to FDA oversight? I mean, I'm glad the FDA exists, but their function is to regulate the overall market -- not make it harder for me to make my _own_ healthcare decisions.

Regulatory capture by the ADA.

Yes, the FDA does make it harder/impossible to treat yourself. You are an idiot as far as they are concerned. And the doctor is the holy grail of decisions. And even how they come to a decision is 'holy' knowledge.

I should be able to go down to a drug store and buy most drugs (ideally all, but another story) and administer them to myself. I should be able to treat myself. But all that's locked away behind one of the biggest paywalls we have in this country.


> Yes, the FDA does make it harder/impossible to treat yourself. You are an idiot as far as they are concerned.

FWIW, the FDA doesn't even think about the end-consumer. They're the government's labelling-standards body. They just care that:

1. if you sell a product labelled 'X' (e.g. "milk", or "ibuprofen"), then it should contain only the ingredients—and the concentrations of such—listed in The Big FDA Book of What Products Labelled 'X' Contain. (This covers the "no salmonella" cases, the "you can't call Cheez Whiz 'cheese'" cases, and the "beef withdrawn from the market for containing more iodine than beef usually contains" cases.)

2. if you make up a new product 'X'—really, a new product label 'X'—then "FDA approval" just means convincing them to add a page to their Big Book. You have to write that page: you must claim all the expected effects of consuming an 'X', and exhaustively list all potential side-effects of consuming an 'X'. Then, you must submit evidence that proves to their satisfaction that your reference-product for the label 'X' has all of those effects you listed; and that it has no other side-effects than the ones you listed.

Food manufacturers usually just have to deal with #1. Drug manufacturers have to deal with #2 and then #1. (Or just #1 if they're making existing drugs.)

The FDA was, for most of its life, just about #1: enforcing product integrity. They also did #2, but #2 wasn't a big deal—getting approval from the FDA for a new drug wasn't supposed to be hard or even expensive, as long as your drug really did something. It could even have disastrous side-effects. If you thought it was still marketable despite those, then you could just tell the FDA about them and they'll approve it. (See: all chemo drugs.) Just give the FDA a proven-accurate page for their Big Book, and they're happy.

But then the pages of the FDA's Big Book began to be taken as truth by various other standards bodies that regulate what can and cannot be sold (sold at all, or sold over-the-counter, etc.).

And, because of that, manufacturers wanted their page in the Big Book to list great effects and few side-effects. Because then the barriers between them and their market are lower.

And what this means is that manufacturers started lying to the FDA, submitting a page describing what they wish the product were like, rather than what it is like. That's the only reason the FDA ever "does not approve" (note: not "rejects", just "does not approve") of a new label—the manufacturer can't prove their claims. I.e., the label isn't true.

Thus began the adversarial and expensive relationship between modern pharma companies and the FDA: the pharma companies want to make everything OTC and want to make a million claims about what each drug does; and the FDA just stands there, shakes its head, and tells them to come back with numbers proving their claims. And then the pharma companies burn through millions/billions of dollars trying to "prove" things that they know aren't true. Until they either eventually fail and just register a realistic monograph; or, rarely, they succeed (due to p-hacking) and end up with a drug that now is being taken by all the wrong people for all the wrong reasons.

Hate the ADA, or the DEA, or any number of other groups that use the FDA's Big Book, but don't hate the FDA themselves. All they're doing is taking a set of words (like "milk" or "ibuprofen"), defining them precisely, and then requiring companies that use those words in their marketing, stick to the definitions those words have in their Big Book.


really appreciate your comment. want to learn more about this:

> ...presumably foisted on companies by the FDA to avoid liability...

because I had thought the foisting ran in the other direction. I had been under the impression that the companies want to avoid liability so they nudge the FDA into creating a regulation that supports what they wanted to do anyway. not sure.


As a medical device software developer: if, while doing your risk analysis, you notice that the user missing something important would create a risk for the patient, the easiest (and cheapest and laziest) solution is often to add a blaring alarm or scary pop-up asking the user to confirm/check.

This way you can say that you have a mitigation, so the FDA is happy, and you are happy because you can sell your product.

The downside is that it creates a horrible user experience most of the time.

This is unfortunate, but it is the result of the incentives of the different actors.
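To make the incentive concrete, here is roughly what such a risk-analysis entry looks like in code form; the structure and numbers are invented, but they show why "add an alarm" is the cheapest line item that lowers the residual risk on paper:

    # Hypothetical ISO 14971-style hazard entry, invented purely to illustrate
    # why a blaring alarm is the cheap mitigation that satisfies the paperwork.
    from dataclasses import dataclass

    @dataclass
    class Hazard:
        description: str
        severity: int               # 1 (negligible) .. 5 (catastrophic)
        probability: int            # 1 (rare) .. 5 (frequent)
        mitigation: str
        residual_probability: int   # estimated probability after mitigation

        @property
        def risk(self) -> int:
            return self.severity * self.probability

        @property
        def residual_risk(self) -> int:
            return self.severity * self.residual_probability

    missed_low = Hazard(
        description="User misses a low-glucose reading overnight",
        severity=5, probability=3,
        # The cheapest mitigation that lets you claim the risk was reduced:
        mitigation="Non-dismissable audible alarm every 5 minutes",
        residual_probability=1,
    )

    print(missed_low.risk, "->", missed_low.residual_risk)  # 15 -> 5 on paper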

Another slightly related issue is that you'd rather restrict what your user can do, because then you're sure it will at least be safe. So doctors can often be frustrated because the system is not permissive enough. Most of the time, as a vendor, you'd rather sell a clumsy system than face the risk of a recall.


> to add a blaring alarm or scary pop-up to ask the user to confirm/check ... The downside is that it creates a horrible user experience most of the time

Yes. At a medical data conference I once heard a doctor say that 70-80% of the alarms in the ICU where she worked were routinely ignored.


Agreed. I had the G4 but eventually quit because it was sounding false alarms frequently. The later versions are more accurate but I question whether I would participate if their programs get more closed off. Also don't want to use a closed source mobile app without knowing what personal data might be getting sent out.



