Can I drop a pacemaker 0day? (erratasec.com)
299 points by jessaustin on June 5, 2014 | 163 comments

Call up CNN and offer to demonstrate how BIOTRONIC is so evil that they refuse to fix their pacemakers. Hook it up to an ECG and use your phone to make it flatline. Then turn to the camera and tell the audience, "because BIOTRONIC doesn't want to pay to fix their product, I can now kill your grandmother just by walking past her on the street."

Watch how long it takes them to fix it then, and watch how reactive they become to responsible disclosure next time.

Also, short their stock before you go on TV. A little something for your troubles.

Terrible advice to pull a media stunt.

First, you have no idea what the manufacturer needs to do to fix the problem, alert customers, do recalls and recertifications, and the like.

Second, you put yourself directly in the line of fire unnecessarily and for all the wrong reasons. You could find yourself on the end of all kinds of legal trouble, and on top of that you would be morally culpable for any harm.

Do it the right way: get a lawyer. The lawyer will know how to contact the vendors, the regulatory agencies, media if necessary, and customers if necessary.

> get a lawyer

Because this is the world we should want to live in? Where you must pay a member of the protection racket to mediate publishing knowledge of someone else's extreme wrongdoing?

That is terrible advice. Its road ends with TORified disclosures of weaponized automated exploits, because as pure info sec has shown, that's the only way the message ever gets across when you give people the insulation to not listen.

Publicly demonstrating these exploits to an amicable media is the best idea I've heard yet, as they have straightforward real-world effects that can be easily illustrated. If certain manufacturers choose to send goons after you rather than fix their buggy products, then the community-accepted custom for them can change to pseudonymous press releases accompanied by a video with a (mock) live human subject.

> Because this is the world we should want to live in? Where you must pay a member of the protection racket to mediate publishing knowledge of someone else's extreme wrongdoing?

Might be useful to distinguish between the ideal and the actual: in an ideal world, you of course shouldn't need a lawyer and the manufacturers should smilingly thank anyone who discovers an exploit and tells them. In this less-than-perfect world I'd suggest getting a lawyer and then going to the media.

Maybe if we just wish harder...

Interesting how with digital rights, when it comes to privacy and outrage about eavesdropping it's "just the USA", but when it comes to liability in an out-of-control justice system it's as if it works the same way in the entire world.

People say that because a lawyer can really, really help you in your cause. The law is byzantine and someone who can navigate it and act as an advisor could make a big difference.

Claiming a medical company's life-saving device will kill your family on national news will almost without a doubt land you on the receiving end of a libel lawsuit, warranted or not. Not having to use lawyers would be nice, but it's not happening in this paradigm. This is a naïve response.

That's what anti-SLAPP laws are for.

Can you elaborate on how lawyers are "member[s] of the protection racket"?

The oft-recommended prudence of having a lawyer's advice for most any action in the public realm indicates a de facto protection racket.

Specifically, the above comment references having a lawyer handle (and moderate) what should be open technical communication with the manufacturer and regulatory agencies, the implication being that simply disclosing facts puts you at grave risk from an endlessly complex legal system.

This situation isn't the same as a protection racket. In a protection racket, it's the racketeers themselves that hurt you when you don't pay.

In this case, lawyers are more like mercenaries. Yes, you can pay them for protection, as you can a racketeer. The differences are that they don't come to you demanding money, and if you don't pay they won't turn around and hurt you, nor will anybody they're directly working with.

Some other lawyers may cause you grief; however, they will be working on behalf of some other party, not the lawyers you didn't hire.

You could argue that the legal system as a whole is a racket, but that's a different sense of the word.

1) You really shouldn't have an open conversation about knowledge that can easily kill people.

2) I'm pretty sure that communication isn't the problem; the problem is that he wants to pressure them into fixing their mess, and that is exactly the point where things get messy from a legal perspective. I can hardly imagine a legal system in which a situation like this would be unproblematic.

"an open conversation about knowledge that can easily kill people"

This phrase and the article contain the same fallacy ("disclosing 0days when they can kill people"). I may be accused of semantic quibbling here, but I think it is important to state the issues clearly and accurately.

Information cannot kill anyone, nor exert any effects at all, ever. It is not causal. Actions using the information may be enabled by knowledge of the information, but they are human choices and not automatic.

This is not merely a matter of careless expression that does not affect the argument. In fact the fallacy lies not only, or not exactly, in supposing that knowledge is causal, but rather in eliding the whole articulation of what happens between the revealing or acquisition of knowledge and the action that may or may not use it in some way.

The situation has a common element with the gun control issue: if someone has a gun, violence is easier, and this may be considered bad, but it does not excuse conflating the shooter's action with someone else's conduct of merely allowing that person to have a gun. It does not shift any responsibility from a competent adult actor to someone who merely allows a gun to be available.

Note also that the gun-possessor, or the person newly armed with knowledge, need not act on it at all, and those who confuse things by missing these distinctions manage to avoid the fallacy in those cases.

> 1) You really shouldn't have an open conversation about knowledge that can easily kill people.

You mean like guns, toxins, and martial arts?

sigh. This will get boring quite fast because a sizable portion of people participating in threads like that find the idea revolting that actions can have, you know, consequences, but what the heck...

If you found a recipe for a toxin that is deadly, untraceable and can be mixed together from common household items by a talented 14 year old, it's probably a bad fucking idea to post it to 4chan. The same goes for hypothetical weapon blueprints or martial arts techniques that would allow one to kill with microscopic risk.

>If you would find a recipe for a toxin that is deadly, untraceable and can be mixed together from common household items by a talented 14 year old, it's probably a bad fucking idea to post that to 4chan.

Does this count as a straw man argument? Wouldn't the actual scenario be more like disseminating the information that a toxin exists that is deadly, untraceable, and can be mixed together from common ingredients, not the recipe itself?

Well, that depends on whether he dropped the actual exploit, or just talked about it. Which was the original question.

This is an extensively amended and much weaker claim than your first.

Don't forget vehicles, diseases, sudden deceleration, falls, drops and darwin award winning behaviour.

It's important that people be aware of the risks they face, and seeking to silence that conversation is not helpful. If anything, it will create an environment where any perpetrators might go unpunished because it's just implausible they did what they did.

> 1) You really shouldn't have an open conversation about knowledge that can easily kill people.

I can easily kill someone with a rock. Just hit them in the head, repeatedly. Why should that be a secret?

Like estate agents and recruiters, they create an inefficiency and exploit it.

I was about to ask something similar. Ever wonder how other people view us engineers or white hat hackers? Misunderstood by outsiders? Perhaps this is applicable to the legal profession as well.

It's right there on the form they sign in blood at their initiation ritual. I forget whether they use their own blood, or the blood of the sacrificial goat though.

I think OP is suggesting revealing the effect, but not revealing the cause. That's what makes the suggestion different from releasing an 0day.

If it's as easy as using 'strings', then isn't that no different than releasing a 0day?

If it's that simple he or she has, in effect, already released it.
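To make the "as easy as using 'strings'" point concrete, here's a purely hypothetical sketch (the firmware file and the embedded marker string are both invented for illustration): if protocol secrets sit in the firmware image as cleartext, one pass of `strings` surfaces them, so distributing the binary is effectively distributing the 0day.

```shell
# Hypothetical demo: fabricate a "firmware" blob with a cleartext
# secret wedged between non-printable bytes, then recover it.
# (firmware.bin and pacemaker-debug-key are made up for this sketch.)
printf '%b' 'MODEL-1234\0000\0001pacemaker-debug-key\0000' > firmware.bin

# strings extracts any printable run of 4+ characters from the binary,
# including the embedded secret
strings firmware.bin
```

If a real device's credentials or protocol constants are recoverable this way, the vendor has already "published" the vulnerability to anyone who downloads the firmware.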

I wonder if a lawyer really caring about his client wouldn't straight up say "just forget about it".

If the company you're criticising doesn't like critics, doesn't care about bad PR, and has to feed their lawyers, you won't have a good time any way you do it.

On some rare occasions you might be better off with strong public opinion supporting you and people coming out of the bushes to help your case, than trying to do it the sneaky way and still getting caught in a hell of legal trouble that not many people ever heard of, where you get cast as the little guy trying to pull money from the big corp because of the narrative your opponent sold to the media.

And who pays the lawyer?

I expect that any savvy lawyer would be happy to take this pro bono. Think of how much free publicity they'd get for their practice.

Ah, excellent plan. That's a variation of the good old "you should write us an app / design a logo / sketch a website for free, just for all the exposure you get".

I see a lot of people on the Internet implying that there are plenty of lawyers just sitting around waiting for interesting and potentially high profile cases to take on for free. I've never seen any evidence of this, and I suspect there are very few.


    Major pro bono matters, or smaller cases with great
    human interest, are far more likely to receive extensive
    coverage. Holland & Knight, for example, received highly
    favorable and extensive coverage of its work in the
    Rosewood case, including a glowing front page, above the
    fold, article in the Wall Street Journal and People.

    Hogan & Hartson, similarly, received a great deal of
    play in the media concerning its representation of
    African-American plaintiffs alleging that Denny’s
    restaurants had discriminated against them. In both
    instances, the firms undertook these time-consuming,
    controversial cases because it was the right thing to
    do. However, their creative, successful lawyering
    became a front-page story.

> you would be morally culpable for any harm

Guns don't kill people...

I'd imagine the correct thing to do is use the FDA's "MedWatch Online Voluntary Reporting Form".

Use the MedWatch form to report adverse events that you observe or suspect for human medical products, including serious drug side effects, product use errors, product quality problems, and therapeutic failures for:... Medical devices (including in vitro diagnostic products)

[1] https://www.accessdata.fda.gov/scripts/medwatch/

That strikes me as a system that was set up with the best intentions, but which in practice acts as a barrier to, rather than facilitator of, change. The only way I can see it compelling a manufacturer to change is if someone is killed by this exploit and their family uses the existence of a report of that vulnerability to push for massive willful negligence damages in a lawsuit.

That strikes me as a system that was set up with the best intentions, but which in practice acts as a barrier to, rather than facilitator of, change


I can see an argument that it is ineffective (maybe it just drops into the FDA bureaucracy), but I'm not sure how you can argue it is actually a barrier?

The FDA does actually have some pretty significant regulatory authority over medical devices. IANAL, but it appears this may be one of the limited cases where the FDA may actually be able to force a recall[1] as opposed to just requesting it. I suspect (and hope) the "recall" would actually be limited to a software update in this case. Even if they can't force it, an FDA-requested recall is a pretty significant thing.

[1] http://www.fda.gov/downloads/AboutFDA/Transparency/PublicDis...

Watch how long it takes them to fix it then...

That strikes me as really optimistic. Another scenario:

Medical equipment manufacturing lobby (I'm assuming there is such a thing) pushes to have such disclosures treated as acts of terrorism. Manufacturer issues a patch that fixes your very specific vulnerability in some trivial, meaningless way. Your career is ruined. Pacemakers truly secured: 0.

If you attempt a stunt like this, please first consider the people who have these pacemakers in their bodies. Put yourself in their shoes as they're watching the news broadcast or getting a frantic call from a family member.

Let's say one of them panics out of fear, has a heart attack, and dies. The family reports this to the media. Now the news media is hunting you down. The authorities want to have a word with you, and you're the target of several lawsuits. Not to mention, you just killed someone with your flippant remarks. Technically speaking, the device manufacturer hasn't hurt anyone at this point. But you've contributed to the death of a person. Is that really what you're after?

Contacting a lawyer to understand the protocol for disclosure and the ramifications won't cost you anything for the initial consultation. Contact the EFF or ACLU and ask for advice. Ask them who you should contact next.

> Technically speaking, the device manufacturer hasn't hurt anyone at this point. But you've contributed to the death of a person. Is that really what you're after?

I agree that this could happen, but the obvious argument is that technically the device manufacturer did just kill their customers by 1) selling them a defective product; and 2) failing to take the opportunity to fix it when notified.

> If you attempt a stunt like this, please first consider the people who have these pacemakers in their bodies. Put yourself in their shoes as they're watching the news broadcast or getting a frantic call from a family member.

The whole idea is to handle this knowledge in the way that leads to the least hardship/pain/death. It's quite possible that a "stunt" like this is the best way, especially considering that there will likely be other less virtuous people making this discovery on their own soon.

Now THAT is how you do it. Grab a "shock of the week" angle and play it for anyone that wants to watch. Only issue is getting a "non-defective" pacemaker. Those things aren't cheap or easy to come by without ordering from the manufacturer. Whatever profit you could make shorting the stock you'd lose almost immediately by having to purchase the device.

I was a Funeral Director in a previous life. I would remove pacemakers if the deceased was to be cremated. We had bags of them. They are cheap and disposable, don't believe anyone that tells you otherwise. Go ask your local Funeral Director for one.

And you're in tech now? That's an interesting career transition. You might write it up sometime; I love stories like that.

It's a tale of a misspent youth, wasted 20's, and a procrastinating disaffected attempt to reclaim my life as I now begin my 30's.

I echo this sentiment. Would love to hear about your path to tech.

I'm sure this is true, but all of the manufacturers provide postage paid biohazard boxes for the explanted devices to be returned to the manufacturer and properly disposed of. All you have to do is call the customer service number and ask for some to be sent to you.

How did ErrataSec find (or verify) this vulnerability if they didn't have access to the pacemaker?

I'm assuming no pacemaker owners were harmed in the finding of this vulnerability.

It seems to me this article is a fantasy. A what if scenario. I don't think this guy really knows pacemakers at all frankly. A bluetooth enabled pacemaker seems far fetched and to my knowledge such a thing does not exist. Pacemakers require the reader device to be in contact with the patient directly over the pacemaker, the range is that limited.

> Call up CNN and offer to demonstrate how BIOTRONIC is so evil that they refuse to fix their pacemakers

We should probably first check to see if BIOTRONIC is one of their advertisers; might be an issue.

There's no need to call CNN. We have YouTube now, which would arguably be a more effective medium if the video can achieve any level of virality.

I'd guess the demographic that cares about this does not go on YouTube enough to make it more effective.

Doesn't matter, CNN et al will rebroadcast once it's gone viral.

Is there a demographic left that doesn't go on YouTube?

I think you underestimate the power of mailing lists among the general public

This might sound pedantic, but don't say "I can now kill your grandmother," it makes you look evil instead of the company you're trying to shame.

BIOTRONIC releasing the patch they should have released anyway, just to stop your evil scheme of murdering the elderly, turns into a PR win for them.

Everything except shorting their stock. If you wrote a hard-hitting exposé or a John Stossel-type broadcast you aren't likely to be branded a terrorist. Make sure you don't reveal how it's actually done, but the fact that it can be.

In short - media showing the potential results (and dramatizing them) puts heat on the company to fix it.

Also, be prepared to be labeled a murderous hacker.

Then they cut to an ad of Watch_Dogs.

No need to involve the media. Call the police, and make an anonymous death threat towards a well-known person with a pacemaker. They will have to investigate. Then send them your POC anonymously.

I'd suggest contacting Andrea Peterson at the Washington Post, and letting her know you have a follow-up story for her piece on how Dick Cheney had the wireless on his pacemaker disabled[1].

[1] http://www.washingtonpost.com/blogs/the-switch/wp/2013/10/21...

But first have some skepticism. You can't interface with a PM without direct contact with a patient. Pacemakers are not bluetooth enabled wifi connected internet appliances. Yes, they are programmed remotely, but in this case remote means a reader device that must be physically placed on a person directly over the pacemaker. While there are things to be legitimately concerned about, this article is a wild fantasy.

"I can now kill your grandmother" - very bad phrasing for TV...

isn't that what weev did? he sure got arrested and prosecuted for it

That will land him in prison. Have you learned nothing?! Think AT&T - and that's just frigging phone info. For this, they'll tar him as a terrorist and throw him in a hole to die.

Don't you need surgery to fix it?

If a pacemaker can be remotely exploited, it can probably be remotely patched as well. Once you have remote root, anything is possible.

It's probably easier to crash than root.

I understand that this isn't the purview of yellow journalism, but to be accurate, the statement should be: "I can now kill your grandmother just by walking past her on the street, and asking her to stand still while I hold an induction wand quite close to her pacemaker"

You got me thinking, so I looked up the protocols used. It's apparently not nearly that complex or limited:



No, original commenter is correct. You'd need contact with your victim, you have to hold a reader device directly over the pacemaker. Really you may as well stab them.

I'd like to know how hard (or easy) it would be to fool a PM into causing an unsafe fast heartbeat. I would imagine there are safeguards at a low level against such a fault. The easiest thing for a PM hacker to do, assuming they have close physical contact with the victim, would be to shut off the pacemaker, and in most cases that would not do much more than cause dizziness or fainting.

This is the most important problem that the internet of things faces. How can we network everything while maintaining at least some scrap of security, especially in the long term? How can we convince people that their toaster is worth patching, and, more importantly, how do we convince vendors that toasters are worth releasing patches for? What if appliance makers go bankrupt and your dishwasher no longer receives patches? How will devices be updated if another Heartbleed-esque situation occurs? It's easier for a user to protect themselves from a 0-day in an app they use, for example, compared to vital home appliances such as dishwashers, refrigerators or washing machines, which cannot merely be uninstalled.

This is a very real threat; most notably, Belkin [0] has suffered critical security breaches, and this issue won't be going away any time soon. How can security researchers get CVEs patched, and how can we prevent them from occurring in the first place? This should be priority #1 for any company trying to bring internet-connected appliances to the mainstream.

[0]: http://arstechnica.com/security/2014/02/password-leak-in-wem...

> How can we network everything while maintaining at least some scrap of security, especially in the long term?

Another question that we should be asking more is should we network everything that could be?

As for a pacemaker, personally I think the answer is a definite NO. It has only one function, to keep someone alive, and any extra functionality only represents an increased risk of malfunction. If there is any firmware in it then that firmware should be as simple as it can be. Preferably open-source and subject to being reviewed/corrected by many, before it gets permanently embedded in a device.

> how can we prevent them from occurring in the first place?

The obvious way is by doing it right the first time. Sadly, this is something that seems to have fallen out of fashion, as the prevalent mentality is more like "we can always issue an update, so it doesn't matter that much". A dangerous mentality indeed when it's applied to truly safety-critical applications. Companies are increasingly pushing for "smartness" in their products, espousing all the ostensible advantages while not giving much exposure to the possible downsides.

I imagine there's some value in being able to update the firmware on a pacemaker. Maybe a new pacemaking algorithm can save 1% more lives or something. Or it could automatically call an ambulance when you have a heart attack, etc.

Implanted medical devices do seem like the ideal situation for wireless access, albeit you probably don't want to overburden the thing with features either.

Does this problem (the growing widespread network insecurity of everyday objects) have a specific name, like "security rot"? If it does, I don't know it. I do know that once you give a complex problem a label (like "net neutrality" or "the Internet of Things"), it becomes a catalyst for discussion. People begin to understand and recognize the label, it becomes a brand that journals and conferences and books and blogs can all focus on. This phenomenon needs a label if we're going to make real progress on it.

The networking of "things" is not a problem as long as you can opt out of it. Can you stop the toaster of the future from talking to the vendor, the crock pot, Google, the neighbors, your router?

Policy-wise we need a requirement of opt-in: the manufacturer can try to convince you that connecting the device to internet is beneficial for you, but has to let you say no.

And on the technical side, if it needs to be authorized in your router, you already have an opt-in. If it's going to connect by default somehow, maybe by open mesh wireless or somesuch, that's a problem for privacy and security.

Implantables, and particularly life-essential ones like pacemakers, are different. They need remote access to enable updating without surgery, but it must be secured well enough to prevent the sort of vulnerability the article describes.

BTW, if you were intent on killing someone, wouldn't it be just as effective to direct a strong RF signal to burn out the electronics, overwhelming any access controls?

Yeah you can't trust Belkin's WeMo line for light switches, and yet they are coming out with integration with things like crock pots and humidifiers soon. This is very scary.

Absolutely NOT because this could kill people.

If you truly have a pacemaker 0day, contact me (joelparkerhenderson) on most major services and I will connect you with my healthcare policy lawyer. She can rapidly open the doors to the vendors who have the risk.

Do most medical device manufacturers carry insurance against lawsuits? If so, historically, how high has the bar been before the insurers pay out? If there is a strong relationship between a device manufacturer getting sued and an insurer losing money then this could be a great contact to try.

Yes, but more importantly, medical device makers have broad immunity when their devices go through the PMA process (the most stringent type of FDA approval). Basically, the argument is "hey, the FDA said it was safe".

Then this is where the pressure needs to be applied - at the certification process. It needs to be made a legal requirement to attain certification (if not already), and the certifiers need to follow best practices for vulnerability detection. And it needs to be an ongoing, open process.

Yes, the FDA certification should include something along the lines of "Manufacturer has an ongoing process to evaluate new vulnerabilities and push updates to affected individuals."

It does, at least for new approvals post ~2012. Doesn't help existing devices in the field though.

The question isn't whether it could kill people, but whether it would kill fewer people than not releasing the exploit.

(Though even if you were sure fewer people would die it would still be an ethical conundrum)

And how exactly does he determine whether that is the case?

Here's an idea:

1.) Responsible disclosure to vendor. Allow reasonable amount of time for a fix to be created and deployed.

2.) (If fix is deployed, release details)

3.) If no fix is deployed in a reasonable amount of time and the vendor is unresponsive, release a PoC that demonstrates exploitability without giving away details. eg: "Here is a pacemaker. Look, I did magic and it stopped!" This is the same idea as releasing the actual vulnerability/exploit, but doesn't put lives at risk. People that could fuzz for any type of vulnerability would be able to find it on their own anyway.

I agree that ICS and health-sensitive vulnerability disclosure is a trickier field than most. Medical devices, cars, and power plants are much more sensitive than a random kid's iPhone; that's why groups like I Am The Cavalry are trying to address the issue industry-wide.

However, to answer the original question: don't drop a pacemaker 0day at DEF CON. Find a way to fix the problem with the vendor instead. At the very "worst," demo without vulnerability or exploit details.

What does 'fix deployed' mean? How do you actually update pacemaker software? Are you going to wait until 100% of the deployed pacemakers are fixed? What is an acceptable fix rate before you release the exploit?

Pacemaker firmware can almost always be updated using inductive or rf telemetry. In most cases it still requires an appointment with a cardiologist or similar physician though.

Anyone who uses a pacemaker will need to have it checked at least a couple of times a year anyway.

And many can now be monitored by the cardiologist from their office as the device uploads data to a server, or can even be reached directly from the physician's console. And both cardiologists and pacemaker companies generally have a pretty good bead on who's walking around with which serial numbered device.

I think you should assume people will reverse engineer patches as soon as they are public. People should treat this like any urgent medical care and address it within hours or days of the patch being available. I don't know much about that area of the industry, but I don't envy it at all. Imagine being the person responsible for a) enabling remote communication, b) allowing updates via remote communication, and c) securing it. What a nightmare situation.

If a large enough majority of people get their pacemakers fixed, it greatly lowers the chances that you'll encounter someone with a defective pacemaker that you can exploit.

This will stop at #1 after vendor sends you DMCA and informs feds that you are planning to kill people by hacking their product.

- Contact the FDA, or other regulatory bodies.

- Contact the customers. They'll likely have standing to sue (they were sold a defective product).

- Class-action attorneys may be interested for this reason.

- Did you know you can pay a very, very modest amount of money to file a press release saying anything you want?

- Contact some investors. Short sellers will have a vested interest in making sure the information gets widely publicised.

I agree that the FDA would be a good place to start. I only know people on the Drug side, so I am not sure who to have you talk to about devices.

I would start with Josh Simms who is in charge of Cardio Devices in the Division of Manufacturing and Quality at 301-796-5540, or maybe someone in the Office of Device Evaluation. Mark Feliman at 301-796-5630 is in charge of Cardiac Electrophysiology Devices, but I think he works more on the approval side.

The appropriate contact info for all of CDRH can be found here: http://www.fda.gov/AboutFDA/CentersOffices/OfficeofMedicalPr...

I believe this comes closest to the proper answer. Eventually, we need technology literate courts, and a law that criminalizes failure to fix disclosed vulnerabilities. Until that day, the process you describe best achieves the same result.

I agree, and -- generally speaking -- OEMs take FDA compliance for Class III medical devices very seriously. If there is a regulation violation that could put patients at risk, the FDA would not hesitate to shut down production and force a fix, whether it's hardware or software.

(Speaking from the perspective of an IT guy at an electronics manufacturing company who is well versed in CFR 11 and medical validation.)

What is the legality regarding the investors shorting?

Since it's on HN you could technically argue that it is now publicly known. If you short the stock past this point you may well be in the clear.

There's no requirement for information to be "publicly known" to trade on it. You have to be an "insider" for it to be insider trading - ie, there has to be some relationship of trust (eg an employee or officer, or in some circumstances people they "tip off").

It's not insider trading; you have no relationship with the company that establishes a responsibility to keep their secrets (might be a different story if you, eg, paid an insider for the information). For that matter the device as it's handed to the public is public information. It's no different than discovering your hamburger is poisoned and shorting McDonald's.

This sort of 0-day has been known in the academic literature for some time[1].

Disclosure of critical vulnerabilities in implantable devices is far more fraught than your normal critical software 0-day. These devices require surgery for replacement, and a small number of those surgeries will have possibly fatal complications. The cost of immediately replacing all existing vulnerable devices could literally be measured in lives. (And that's even assuming that the device manufacturer fixed the problem!)

Implantable software is already a very tricky area, and there's no signs that it'll get any easier.

[1] Pacemakers and Implantable Cardiac Defibrillators: Software Radio Attacks and Zero-Power Defenses, http://www.secure-medicine.org/public/publications/icd-study...

It's not quite "this could kill people" but rather "this could be used to kill someone." But there are a lot of things that one person could use against another person to kill them, ethically, what does adding this thing to the list change?

If you discovered / disclosed a particular way the unit could malfunction and kill someone it seems like that's put in a different class; in that case you're a hero saving lives. But if you report on a technique someone could use to cause the device to malfunction, it's treated completely differently.

I think a related and important message is that pacemaker "malfunctions" should be treated as possibly suspicious.

This is a way to kill someone at a distance, with no obvious trace leading to you, and using nothing but an off the shelf laptop or phone. It's significantly more dangerous than any of the other known methods of murder because of the reduced risk to the murderer.

My understanding is that pacemakers, insulin pumps, and such have only limited short-range wireless capability. It's not exactly a 3G connection with a public facing IP address.

As long as you still need proximity and individual targeting, I think it's not a paradigm shift in murder.

The short range is to do with the antenna in the device which is (obviously) limited to a certain size.

A larger aerial on the attacking device will allow for communication over a greater range.

The paradigm shift is when you can sit 6 rows back at a baseball stadium and take out someone or walk through their subway car and kill them.

There is no trace evidence, and done in a crowd at rush hour there is essentially no chance of getting caught; heart attacks happen all the damn time.

How about releasing the vulnerability in stages? The author jumps from unresponsive vendor to releasing exploit code. What if you add steps between the two?

For example:

- Announcing a vulnerability has been found and identifying the unresponsive vendor.

- Announcing what the disclosure timeline will be.

- Detailing the product lines known to be affected by the vulnerability.

- Publishing communication with the vendor so far with any details about the vulnerability redacted.

- Private disclosure to professionals (doctors & journalists) to have them independently verify that the vulnerability exists and help with raising awareness.

- Full details about the vulnerability, but no exploit code.

This just sounds like responsible disclosure to me, with added steps because the "responsible" part requires you to act differently given the possible risk involved. This is likely the best way to go, and I'd expect legal advice to back it up were such an exploit to actually exist.

What pacemaker communicates via Bluetooth? Last I checked they all used induction telemetry (which requires the telemetry wand to be within several inches of the device) or MICS-band radio for distance telemetry. I think some Boston Scientific devices used 900 MHz at one time, but how many of those are still in the wild?

The only instances of "hacking" a pacemaker (or ICD) have been when researchers used a programmer from the manufacturer to "hack" the device.

So it seems super unlikely you know a Bluetooth zero-day for a pacer.

>The only instances of "hacking" a pacemaker [...] //

Someone has linked a PDF of an "ICD study" upthread that shows your contention to be at least partially false.

I assume you mean this one:


You would need to be specific about which one of the linked documents you meant, there were several. All the hacking attempts started off with a manufacturer's programmer and worked back from there. In the example using a software radio, the researchers were able to replay sniffed commands to the device after it had been activated by the programmer.

Which of the reports talked about a device being compromised without using a manufacturer's programmer?

Yes. I've not pored over it but they said:

>"We implemented several active [replay] attacks using [only] the USRP and a BasicTX daughterboard to transmit on the 175 kHz band." //

Yes, they used a programmer for reverse engineering purposes but from my - admittedly brief - look at the paper it seemed they performed active attacks (page 8(A) onwards) without using the programmer.

So they previously used a programmer, but the attacks were performed without one. Assuming that's true, it seems a reasonable PoC that contradicts the essence of your statement, which seemed to say all "hacks" needed a manufacturer's programmer to perform.
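The replay weakness described in that paper can be sketched in the abstract: a device that accepts any byte-identical command, with no freshness check, can be driven by a verbatim recording, while even a simple challenge-nonce scheme defeats a pure replay. Below is a minimal Python simulation of that idea; the shared key, command bytes, and class names are all hypothetical and do not reflect any real device protocol.

```python
import hmac, hashlib, os

KEY = b"shared-programmer-key"  # hypothetical shared secret between programmer and device

class NaiveDevice:
    """Accepts any correctly-MAC'd command -- a sniffed recording replays forever."""
    def handle(self, cmd: bytes, tag: bytes) -> bool:
        return hmac.compare_digest(tag, hmac.new(KEY, cmd, hashlib.sha256).digest())

class NonceDevice:
    """Issues a fresh nonce per exchange; a sniffed tag is useless afterwards."""
    def challenge(self) -> bytes:
        self.nonce = os.urandom(16)
        return self.nonce
    def handle(self, cmd: bytes, tag: bytes) -> bool:
        expected = hmac.new(KEY, self.nonce + cmd, hashlib.sha256).digest()
        ok = hmac.compare_digest(tag, expected)
        self.nonce = os.urandom(16)  # burn the nonce whether or not the check passed
        return ok

# An attacker sniffs one legitimate exchange with the naive device...
cmd = b"SET_RATE:60"
tag = hmac.new(KEY, cmd, hashlib.sha256).digest()
naive = NaiveDevice()
assert naive.handle(cmd, tag)   # legitimate use accepted
assert naive.handle(cmd, tag)   # replayed verbatim -- still accepted

# ...but the same recording fails against the nonce-based device.
nd = NonceDevice()
n1 = nd.challenge()
good = hmac.new(KEY, n1 + cmd, hashlib.sha256).digest()
assert nd.handle(cmd, good)     # fresh response accepted
nd.challenge()
assert not nd.handle(cmd, good) # replay rejected: nonce has changed
```

The point is only the protocol shape: a replay attack needs no knowledge of the key at all, which is consistent with the paper's finding that sniffed commands could be retransmitted with a software radio.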

Indeed, all pacers I know of don't have Bluetooth comms precisely because of the potential for vulnerabilities.

(I used to work for a pacer company)

> So let's say

What do you expect them to do? Even assuming they were 100% concerned with security and did everything right and there was still a bug that allowed a pacemaker to be compromised. Do you expect them to cut open a person and replace the buggy pacemaker?

I don't pretend to be an expert in this area but getting medical equipment approved is a huge undertaking and I don't know what the ramifications of changing anything would be. Say they take your 0day and fix it. Then they have to go through the entire re-certification process again and after however many months or years, NEW patients get the fixed pacemaker. But what about all the old patients?

While I sympathize, the only realistic approach here is to make the consequences for killing someone via a 0day for the "lulz" so drastic that it would certainly legally bleed over into the disclosure. I realize this is the approach we do tend to take here in the US.

Someone made a comment above stating that people with pacemakers typically have to go in once or twice a year to get it checked, and the devices can be updated using 'inductive or rf telemetry'. Presumably doctors could update the devices when patients come in.

In the case of medical devices, this is squarely in the FDA's wheelhouse in the USA. The FDA likely lacks the people with appropriate expertise to evaluate these kinds of safety issues because their traditional focus has been on the more typical kinds of medical device risk. A concerted effort at dialog with them could turn that around. Particularly if it were done through a series of academic workshops with key people.

Make a YouTube video of the hack actually working on a pacemaker (preferably one that is not in a person). Show how it can be executed from a smart phone while walking down the street or sitting at Starbucks.

Send that to the company and the media. You are best off also showing documentation that you told the offending company multiple times.

Show don't tell.

"The problem is that dropping a pacemaker 0day is so horrific that most people would readily agree it should be outlawed. But, at the same time, without the threat of 0day, vendors will ignore the problem."

If this is the case, then wouldn't the same most-people (if made aware of the issue) also agree that it should be illegal for a company's management to ignore life-threatening software flaws in their products after being notified?

I mean illegal as in reckless endangerment or manslaughter, not illegal as in lawsuits and golden parachutes.

And so it begins. I was wondering when we'd finally start seeing the InfoSec guys get to this. The more recent stuff branching into CAN on cars and before that SCADA systems seemed to be the last sort of stepping stone from a traditional PC network to the internet of things networks.

I'm sort of glad, in a twisted way, that this has finally happened. Better the light get cast on this now than in a few years once the criminal(/nation-state...) equivalents have had time to go through it themselves.

I remember reading that Cheney had them remove all wireless functionality from his pacemaker because they were afraid of the potential of someone using it for assassination.[1]


EDIT: Also, no you shouldn't release a pacemaker 0day. As others have said, expose it without releasing details. Makes for a nice demo.

I think that there are a lot of ways to approach this. The Heartbleed disclosure was very well done and has a lot of lessons, perhaps there's something to learn from that.

Personally, I think it's completely unacceptable the way many technologies critical to keeping people alive are so vulnerable. Especially if the vulnerabilities are as widespread as the article suggests (30%!), find a list of 10-20 that vary in importance. List all the products, and list the consequences of each vulnerability.

Then start dropping 0-days one at a time until the industry realizes you are serious. Start with the less severe ones, but if the pacemaker vulnerability hasn't been addressed after a few months of weekly vulnerability releases, don't hold back. The more publicity you can get the more likely a company is to patch vulnerabilities.

If _teenagers_ are capable of finding vulnerabilities that can end lives using a script they downloaded online, then we need to be ready to take drastic action. The industry is in a terrible state and we aren't safe, and decreasingly so as these gaping holes continue to sit there and be discovered.

This is an incredibly sensational piece. All of the sane suggestions are dismissed as "doesn't work" by giving one example where it didn't work. It's not that easy - going to the media won't solve the problem 100% of the time but it sure as hell would if it were a life and death 0day and wasn't fixed with urgency.

Don't even get me started about the Nazi analogy...

If it’s an obvious vulnerability, is there value in withholding the details? There is a strong case to be made for the argument that the people who would be willing to use such a 0day maliciously (sociopaths) would find it anyway.

It could potentially also depend on how easily the vulnerability can be patched—one that can be patched remotely can be dealt with much more rapidly than one that will require surgery to replace the device. If one assumes that full disclosure will lead to the fixing of the issue, the first class is probably closer to being judged “responsible” than the second.

It is certainly a difficult dilemma. The correct answer can only be known with the benefit of hindsight…
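On the remotely-patchable case: a device that accepts over-the-air updates still has to authenticate them, or the update channel itself becomes the attack surface. A toy sketch of that check, using an HMAC as a stand-in (a real device would embed a vendor public key and verify an asymmetric signature; the key and image names here are invented):

```python
import hmac, hashlib

VENDOR_KEY = b"vendor-signing-key"  # hypothetical; real firmware uses asymmetric signatures

def sign_update(firmware: bytes) -> bytes:
    """Vendor-side: tag a firmware image."""
    return hmac.new(VENDOR_KEY, firmware, hashlib.sha256).digest()

def apply_update(firmware: bytes, signature: bytes, current: bytes) -> bytes:
    """Device-side: install the new image only if its tag verifies, else keep the old one."""
    if hmac.compare_digest(signature, sign_update(firmware)):
        return firmware
    return current

old = b"fw-v1"
new = b"fw-v2-patched"
assert apply_update(new, sign_update(new), old) == new  # authentic patch installed
assert apply_update(new, b"\x00" * 32, old) == old      # forged update rejected
```

Whether a given implanted device actually verifies updates this way is exactly the kind of detail a disclosure would need to establish.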

When there is a flaw in a car seat or a child's toy, everyone flips shit and recalls start happening. It's covered on the local news and all that. Why doesn't that happen for pacemakers? Isn't this a problem for the Consumer Product Safety Commission or the FDA?

It's a product that has a flaw. Seems like it qualifies for a public recall.

I don't understand why there is such a debate here. I would absolutely disclose the 0-day if the manufacturer was unresponsive (given sufficient warning, of course). Moreover, if anyone died, I wouldn't feel the least bit guilty about that - the guilt rests firmly on the manufacturer and the individuals who choose to use the exploit.

After all, black-market exploits will come, and people will die, whether you disclose the vulnerability or not. At least with disclosure, the innocent have a chance to protect themselves.

You must weigh the lives lost to silence against the lives lost to disclosure. We practice disclosure in all other areas of computer security because we have seen the cost of silence too many times. There is no reason it should be different here.

Disclosure saves lives.

This is clearly a case for a regulatory agency that can and will penalize manufacturers that fail to fix, in a timely manner, bugs that are physically possible to fix without ill effect. You simply report to that agency.

These problems are serious enough that failing fast and hard is not a good way to go about it. This is software meets physical reality. In software land, we've developed radically different approaches to engineering problems because of the incredibly cheap costs. This is one place where we have to borrow from other disciplines that have more experience with safety issues.

The solution I find is relatively simple from a moral standpoint: exhaust all options. Instead of saying "ah, it won't work", just do it anyway; contact anyone and everyone privately related to this, and do everything you possibly can before press exposure. With the press you can demonstrate, but not release, the actual steps to how it's done. If it is still denied as fake or ignored despite all of that, release it if you feel the moral obligation to do so. "You must do what you think is right, of course."

Of course, don't publicize it.

But consider a thought experiment: from a security perspective, does it really increase risk? There are many ways to kill someone that are much simpler than figuring out what pacemaker they have, finding a 0day, designing an attack, and implementing it. This vulnerability doesn't necessarily increase the risk that someone will be murdered.

EDIT: Or if you want to attack the pacemaker, use radiation from microwaves or similar devices. At least according to signs posted in many places, they are dangerous to users of pacemakers.

There seems to be this idea embedded in human minds that there is a group of people who are just waiting to kill someone, and the fact that they have not yet found a way of doing it is the only thing preventing them from doing so. As soon as you show them that you can do it by hacking a pacemaker, they will go ahead and do it.

There is no shortage of methods of killing someone or inflicting bodily harm. As far as moral culpability, showing how a 0day exploit can be used to kill a person is akin to saying that you can use arsenic to kill grandma.

relevant: "It’s Insanely Easy to Hack Hospital Equipment" (wired.com)

[article]: http://www.wired.com/2014/04/hospital-equipment-vulnerable/

[hn thread]: https://news.ycombinator.com/item?id=7684291

Of course you can. Executing this type of attack would be more technically complex than executing a plain old murder with, say, a simple gun.

But also significantly less traceable back to the murderer.

Notify the manufacturer's products liability insurer through a respectable and concerned PI lawyer. Action will be swift.

Ideally, there would be some way to disclose this kind of info to whatever government regulatory agency is responsible for approving these things. In your situation, you have to worry that whoever you disclose this to is going to act irrationally, and governments exist to mediate between irrational actors for the benefit of the public.

Wasn't Barnaby Jack going to unveil a pacemaker 0day last year before he [mysteriously] died?

He didn't mysteriously die. There are several people on HN that knew him personally, and there's been a lot of creepy not-even-wrong speculation about what happened to him. None of his friends appreciate it.

I was referring to the timing of things. But yes, I don't disagree. He did some very brilliant work.

Drop the 0day. I'd say anonymously, but not necessarily. If it's an obvious bug, this isn't a moral issue at all. Had it been a multi-stage, complicated exploit (hard to find and implement), I would suggest otherwise. In this case it's a no-brainer, IMHO.


Michael Hastings and Barnaby Jack were not the same person. They weren't in the same industry; they weren't even from the same hemisphere. To my knowledge, there were zero connections between the two.

The death of Barnaby Jack (who was to present at Defcon) was tragic, but not suspicious.

@fiatmoney's comment seems like very good advice. Here's a link since it's very far below the fold:


I think that unless you want to carry the burden of possible assassinations that you could have stopped, you have to disclose the vulnerability ASAP. In order to avoid legal vendettas, I recommend to do it anonymously.

Pacemakers have a remote exploit bug? If you can get this close to my heart, you might as well stick a knife in it. The only difference is that the one will look like a failed pacemaker and the other will look like murder.

I suppose Heartbreak Bug would be an awfully appropriate name for one.

Write a script for CSI or one of these police procedural shows.

Warn the CDC of an impending health issue and include very specific details of the at-risk group and the transmission vector.

Anonymously do what you think is right.

It would be interesting if someone created a payload that patches exploited device.

The ONLY people who need to know about this is the manufacturer and their regulatory body FDA, etc.

The company can then pull all the inventory that is in and out of the patients and apply fixes or facilitate replacements.

If this could kill people, I'd hope the above ideas would be obvious...but well I know they won't be to everyone.

If I had this product implanted in me and it was known for a substantial amount of time that said product was 0dayed, you better believe I have a right to know.

Correct, it's not obvious to me. If there is a flaw in my medical device, I want you to tell me about it...

It's amazing to me that you think I'm not entitled to know there's a problem. Do you also believe I shouldn't be informed if the lock on my door doesn't work? You must hate those consumer watchdog shows!

If someone is capable of and willing to kill me with a pacemaker bug, they likely already have the skills to find the bug. Not telling me just withholds the opportunity for me to protect myself (stay indoors? contact my doctor? etc.).

You should definitely be informed there is a serious issue. Whether you (and everyone else) should be given instructions how to duplicate that issue is a different matter.

Is there any secure way to update a pacemaker's software without surgery?

Contact the surgeons who implant them and demonstrate to them.

It was really sketchy when it happened on Homeland.


Where is the landing page and scary logo?

You can play moral white hat hacker when you are disclosing vulns in some shitty Chinese router, or participating in one of bug bounty programs, but NOT when you play against big boys (AT&T, MBTA, Juniper, Adobe, etc). You will get crushed, jailed and humiliated.

Disclose fully, but anonymously.

I call BS.

How does an ECM (or anti-lock brake controller) "slam on the brakes"?

All cars produced for US sale after 2011 have mandatory traction control systems that work by modulating the brakes on a corner by corner basis. It's not inconceivable that this could be hacked.
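For context on what "hacking" a brake controller over CAN would even look like at the wire level: CAN traffic is just short ID-plus-payload frames, and on Linux the SocketCAN `struct can_frame` layout is public. A small sketch of packing such a frame follows; the CAN ID and payload bytes are purely illustrative, since the actual brake/traction message IDs are manufacturer-specific and not public.

```python
import struct

def pack_can_frame(can_id: int, data: bytes) -> bytes:
    """Pack a classic SocketCAN frame: u32 can_id, u8 dlc, 3 pad bytes, 8 data bytes (16 total)."""
    if len(data) > 8:
        raise ValueError("classic CAN payload is at most 8 bytes")
    return struct.pack("<IB3x8s", can_id, len(data), data.ljust(8, b"\x00"))

frame = pack_can_frame(0x1A0, b"\x01\x02")  # 0x1A0 is a made-up arbitration ID
assert len(frame) == 16
assert frame[4] == 2  # DLC byte records the 2-byte payload
```

The hard part of the attacks discussed in the automotive research isn't forming frames like this; it's getting bus access and reverse-engineering which IDs the ABS/traction controller listens to.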

I would do it in an instant, for the fame and future business opportunities alone.
