Watch how long it takes them to fix it then, and watch how reactive they become to responsible disclosure next time.
Also, short their stock before you go on TV. A little something for your troubles.
First, you have no idea what the manufacturer needs to do to fix the problem, alert customers, do recalls and recertifications, and the like.
Second, you put yourself directly in the line of fire unnecessarily and for all the wrong reasons. You could find yourself on the end of all kinds of legal trouble, and on top of that you would be morally culpable for any harm.
Do it the right way: get a lawyer. The lawyer will know how to contact the vendors, the regulatory agencies, media if necessary, and customers if necessary.
Because this is the world we should want to live in? Where you must pay a member of the protection racket to mediate publishing knowledge of someone else's extreme wrongdoing?
That is terrible advice. Its road ends with Tor-ified disclosures of weaponized automated exploits, because as infosec has shown, that's the only way the message ever gets across when you give people the insulation not to listen.
Publicly demonstrating these exploits to an amicable media is the best idea I've heard yet, as they have straightforward real-world effects that can be easily illustrated. If certain manufacturers choose to send goons after you rather than fix their buggy products, then the community-accepted custom for them can change to pseudonymous press releases accompanied by a video with a (mock) live human subject.
Might be useful to distinguish between the ideal and the actual: in an ideal world, you of course shouldn't need a lawyer and the manufacturers should smilingly thank anyone who discovers an exploit and tells them. In this less-than-perfect world I'd suggest getting a lawyer and then going to the media.
Specifically, the above comment references having a lawyer handle (and moderate) what should be open technical communication with the manufacturer and regulatory agencies, the implication being that simply disclosing facts puts you at grave risk from an endlessly complex legal system.
In this case, lawyers are more like mercenaries. Yes, you can pay them for protection, as you can a racketeer. The differences are that they don't come to you demanding money, and if you don't pay they won't turn around and hurt you, nor will anybody they're directly working with.
Some other lawyers may cause you grief; however, they will be working on behalf of some other party, not the lawyers you didn't hire.
You could argue that the legal system as a whole is a racket, but that's a different sense of the word.
2) I'm pretty sure that communication isn't the problem; the problem is that he wants to pressure them into fixing their mess, and that is exactly the point where things get messy from a legal perspective. I can hardly imagine a legal system in which a situation like this would be unproblematic.
This phrase and the article contain the same fallacy ("disclosing 0days when they can kill people"). I may be accused of semantic quibbling here, but I think it is important to state the issues clearly and accurately.
Information cannot kill anyone, nor exert any effects at all, ever. It is not causal. Actions using the information may be enabled by knowledge of the information, but they are human choices and not automatic.
This is not merely a matter of careless expression that does not affect the argument. In fact the fallacy is not only, or not exactly supposing that knowledge is causal, but rather in eliding the whole articulation of what happens between the revealing or acquisition of knowledge and the action that may or may not use it in some way.
The situation has a common element with the gun control issue: if someone has a gun, violence is easier, and this may be considered bad, but it does not excuse conflating the shooter's action with someone else's conduct of merely allowing that person to have a gun. It does not shift any responsibility from a competent adult actor to someone who merely allows a gun to be available.
Note also that the gun-possessor, or the person newly armed with knowledge, need not act on it at all, and those who confuse things by missing these distinctions manage to avoid the fallacy in those cases.
You mean like guns, toxins, and martial arts?
If you found a recipe for a toxin that is deadly, untraceable, and can be mixed together from common household items by a talented 14-year-old, it's probably a bad fucking idea to post it to 4chan. The same goes for hypothetical weapon blueprints or martial arts techniques that would allow one to kill with microscopic risk.
Does this count as a straw man argument? Wouldn't the actual scenario be more like disseminating the information that a toxin that is deadly, untraceable, and can be mixed together from common ingredients exists, not the recipe itself?
It's important that people be aware of the risks they face, and seeking to silence that conversation is not helpful. If anything, it will create an environment where any perpetrators might go unpunished because it's just implausible that they did what they did.
If the company you're criticising doesn't like critics, doesn't care about bad PR, and has to feed their lawyers, you won't have a good time any way you do it.
On some rare occasions you might be better off with strong public opinion supporting you and people coming out of the woodwork to help your case, rather than trying to do it the sneaky way, still getting caught in a hell of legal trouble that few people ever hear of, and being cast as the little guy trying to pull money from the big corp because of the narrative your opponent sold to the media.
Major pro bono matters, or smaller cases with great human interest, are far more likely to receive extensive coverage. Holland & Knight, for example, received highly favorable and extensive coverage of its work in the Rosewood case, including a glowing front-page, above-the-fold article in the Wall Street Journal and People. Hogan & Hartson, similarly, received a great deal of play in the media concerning its representation of African-American plaintiffs alleging that Denny's restaurants had discriminated against them. In both instances, the firms undertook these time-consuming, controversial cases because it was the right thing to do. However, their creative, successful lawyering became a front-page story.
Guns don't kill people...
Use the MedWatch form to report adverse events that you observe or suspect for human medical products, including serious drug side effects, product use errors, product quality problems, and therapeutic failures for:... Medical devices (including in vitro diagnostic products)
I can see an argument that it is ineffective (maybe it just drops into the FDA bureaucracy), but I'm not sure how you can argue it is actually a barrier.
The FDA does actually have some pretty significant regulatory authority over medical devices. IANAL, but it appears this may be one of the limited cases where the FDA may actually be able to force a recall as opposed to just requesting it. I suspect (and hope) the "recall" would actually be limited to a software update in this case. Even if they can't force it, an FDA-requested recall is a pretty significant thing.
That strikes me as really optimistic. Another scenario:
Medical equipment manufacturing lobby (I'm assuming there is such a thing) pushes to have such disclosures treated as acts of terrorism. Manufacturer issues a patch that fixes your very specific vulnerability in some trivial, meaningless way. Your career is ruined. Pacemakers truly secured: 0.
Yes, there is a lobby.
Let's say one of them panics out of fear, has a heart attack, and dies. The family reports this to the media. Now the news media is hunting you down. The authorities want to have a word with you, and you're the target of several lawsuits. Not to mention, you just killed someone with your flippant remarks. Technically speaking, the device manufacturer hasn't hurt anyone at this point. But you've contributed to the death of a person. Is that really what you're after?
Contacting a lawyer to understand the protocol for disclosure and the ramifications won't cost you anything for the initial consultation. Contact the EFF or ACLU and ask for advice. Ask them who you should contact next.
I agree that this could happen, but the obvious argument is that technically the device manufacturer did just kill their customers by 1) selling them a defective product; and 2) failing to take the opportunity to fix it when notified.
The whole idea is to handle this knowledge in the way that leads to the least hardship/pain/death. It's quite possible that a "stunt" like this is the best way, especially considering that there will likely be other less virtuous people making this discovery on their own soon.
I'm assuming no pacemaker owners were harmed in the finding of this vulnerability.
We should probably check first to see if BIOTRONIC is one of their advertisers; might be an issue.
BIOTRONIC releasing the patch they should have released anyway, just to stop your evil scheme of murdering the elderly, turns into a PR win for them.
In short - media showing the potential results (and dramatizing them) puts heat on the company to fix it.
This is a very real threat; most notably, Belkin has suffered critical security breaches, and this issue won't be going away any time soon. How can security researchers get CVEs patched, and how can we prevent them from occurring in the first place? This should be priority #1 for any company trying to bring internet-connected appliances to the mainstream.
Another question that we should be asking more is should we network everything that could be?
As for a pacemaker, personally I think the answer is a definite NO. It has only one function, to keep someone alive, and any extra functionality only represents an increased risk of malfunction. If there is any firmware in it then that firmware should be as simple as it can be. Preferably open-source and subject to being reviewed/corrected by many, before it gets permanently embedded in a device.
> how can we prevent them from occurring in the first place?
The obvious way is by doing it right the first time. Sadly, this seems to have fallen out of fashion, as the prevalent mentality is more like "we can always issue an update, so it doesn't matter that much". A dangerous mentality indeed when it comes to truly safety-critical applications. Companies are increasingly pushing for "smartness" in their products, espousing all the ostensible advantages while not giving much exposure to the possible downsides either.
Policy-wise we need a requirement of opt-in: the manufacturer can try to convince you that connecting the device to internet is beneficial for you, but has to let you say no.
And on the technical side, if it needs to be authorized in your router, you already have an opt-in. If it's going to connect by default somehow, maybe by open mesh wireless or somesuch, that's a problem for privacy and security.
Implantables, and particularly life-essential ones like pacemakers, are different. They need remote access to enable updating without surgery, but that access must be secured well enough to prevent the sort of vulnerability the article describes.
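To make "secured well enough" concrete, here's a minimal sketch of nonce-based challenge-response authentication, the standard way to make a sniffed command transcript useless for replay. Everything here (command names, key provisioning, the protocol itself) is my own illustrative assumption, not any vendor's actual design:

```python
import hmac, hashlib, os

# Hypothetical: a secret key provisioned at implant time, shared
# only between the device and authorized programmers.
SHARED_KEY = os.urandom(32)

class Device:
    def challenge(self) -> bytes:
        # A fresh nonce per session means an old transcript can't be replayed.
        self.nonce = os.urandom(16)
        return self.nonce

    def execute(self, command: bytes, tag: bytes) -> bool:
        # Only accept commands whose MAC covers the *current* nonce.
        expected = hmac.new(SHARED_KEY, self.nonce + command, hashlib.sha256).digest()
        return hmac.compare_digest(expected, tag)

def programmer_send(device: Device, command: bytes) -> bool:
    nonce = device.challenge()
    tag = hmac.new(SHARED_KEY, nonce + command, hashlib.sha256).digest()
    return device.execute(command, tag)

dev = Device()
assert programmer_send(dev, b"set_rate:70")  # legitimate programmer succeeds

# An eavesdropper replaying a captured (command, tag) pair fails,
# because the device has since issued a new nonce.
old_nonce = dev.challenge()
old_tag = hmac.new(SHARED_KEY, old_nonce + b"set_rate:200", hashlib.sha256).digest()
dev.challenge()  # new session, new nonce
assert not dev.execute(b"set_rate:200", old_tag)
```

The cryptography itself is cheap; the hard parts for real implantables are key management over the device's lifetime and the power budget for doing this on every radio exchange.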
BTW, if you were intent on killing someone, wouldn't it be just as effective to direct a strong RF signal to burn out the electronics, overwhelming any access controls?
If you truly have a pacemaker 0day, contact me (joelparkerhenderson) on most major service and I will connect you with my healthcare policy lawyer. She can rapidly open the doors to the vendors who have the risk.
(Though even if you were sure fewer people would die it would still be an ethical conundrum)
1.) Responsible disclosure to vendor. Allow reasonable amount of time for a fix to be created and deployed.
2.) (If fix is deployed, release details)
3.) If no fix is deployed in a reasonable amount of time and the vendor is unresponsive, release a PoC that demonstrates exploitability without giving away details. eg: "Here is a pacemaker. Look, I did magic and it stopped!" This is the same idea as releasing the actual vulnerability/exploit, but doesn't put lives at risk. People that could fuzz for any type of a vulnerability would be able to find it on their own anyway.
I agree that ICS and health-sensitive vulnerability disclosure is a trickier field than most. Medical devices, cars, and power plants are much more sensitive than a random kid's iPhone; that's why groups like I Am The Cavalry are trying to address the issue industry-wide.
However, to answer the original question: don't drop a pacemaker 0day at DEF CON. Find a way to fix the problem with the vendor instead. At the very "worst," demo without vulnerability or exploit details.
- Contact the customers. They'll likely have standing to sue (they were sold a defective product).
- Class-action attorneys may be interested for this reason.
- Did you know you can pay a very, very modest amount of money to file a press release saying anything you want?
- Contact some investors. Short sellers will have a vested interest in making sure the information gets widely publicised.
I would start with Josh Simms who is in charge of Cardio Devices in the Division of Manufacturing and Quality at 301-796-5540, or maybe someone in the Office of Device Evaluation. Mark Feliman at 301-796-5630 is in charge of Cardiac Electrophysiology Devices, but I think he works more on the approval side.
The appropriate contact info for all of CDRH can be found here: http://www.fda.gov/AboutFDA/CentersOffices/OfficeofMedicalPr...
(Speaking from the perspective of an IT guy at an electronics manufacturing company who is well versed in CFR 11 and medical validation.)
Disclosure of critical vulnerabilities in implantable devices is far more fraught than your normal critical software 0-day. These devices require surgery for replacement, and a small number of those surgeries will have possibly fatal complications. The cost of immediately replacing all existing vulnerable devices could literally be measured in lives. (And that's even assuming that the device manufacturer fixed the problem!)
Implantable software is already a very tricky area, and there's no signs that it'll get any easier.
Pacemakers and Implantable Cardiac Defibrillators: Software Radio Attacks and Zero-Power Defenses, http://www.secure-medicine.org/public/publications/icd-study...
If you discovered / disclosed a particular way the unit could malfunction and kill someone it seems like that's put in a different class; in that case you're a hero saving lives. But if you report on a technique someone could use to cause the device to malfunction, it's treated completely differently.
I think a related and important message is that pacemaker "malfunctions" should be treated as possibly suspicious.
As long as you still need proximity and individual targeting, I think it's not a paradigm shift in murder.
A larger aerial on the attacking device will allow for communication over a greater range.
The paradigm shift is when you can sit 6 rows back at a baseball stadium and take out someone or walk through their subway car and kill them.
There is no trace evidence, and done in a crowd at rush hour there's essentially no chance of getting caught; heart attacks happen all the damn time.
- Announcing a vulnerability has been found and identifying the unresponsive vendor.
- Announcing what the disclosure timeline will be.
- Detailing the product lines known to be affected by the vulnerability.
- Publishing communication with the vendor so far with any details about the vulnerability redacted.
- Private disclosure to professionals (doctors & journalists) to have them independently verify that the vulnerability exists and help with raising awareness.
- Full details about the vulnerability, but no exploit code.
The only instances of "hacking" a pacemaker (or ICD) have been when researchers used a programmer from the manufacturer to "hack" the device.
So it seems super unlikely you know a Bluetooth zero day for a pacer.
Someone has linked a PDF of an "ICD study" upthread that shows your contention to be at least partially false.
You would need to be specific about which one of the linked documents you meant, there were several. All the hacking attempts started off with a manufacturer's programmer and worked back from there. In the example using a software radio, the researchers were able to replay sniffed commands to the device after it had been activated by the programmer.
Which of the reports talked about a device being compromised without using a manufacturer's programmer?
>"We implemented several active [replay] attacks using [only] the USRP and a BasicTX daughterboard to transmit on the 175 kHz band." //
Yes, they used a programmer for reverse engineering purposes but from my - admittedly brief - look at the paper it seemed they performed active attacks (page 8(A) onwards) without using the programmer.
So they previously used a programmer, but the attacks were performed without one. Assuming that's true, it seems a reasonable PoC that contradicts the essence of your statement, which seemed to say all "hacks" needed a manufacturer's programmer to perform.
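For what it's worth, the vulnerable pattern being debated reduces to a toy model (hypothetical command names, nothing from any real protocol): if the device checks only that a frame is well-formed, with no freshness or authentication check, then once an attacker has sniffed one legitimate session the programmer is never needed again.

```python
class NaiveDevice:
    """Toy model of a device that executes any well-formed command frame,
    with no authentication or freshness check."""
    VALID = {b"interrogate", b"test_shock"}

    def execute(self, frame: bytes) -> bool:
        # Accepts the frame purely on its contents; identical bytes
        # sent later (by anyone) are indistinguishable from the original.
        return frame in self.VALID

# 1. Attacker sniffs one legitimate programmer session.
captured_frames = [b"interrogate", b"test_shock"]

# 2. Later, with only a software radio, replaying the identical bytes works.
dev = NaiveDevice()
assert all(dev.execute(f) for f in captured_frames)
```

That's the essence of a replay attack: the programmer is only needed once, to generate traffic worth capturing.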
(I used to work for a pacer company)
I don't pretend to be an expert in this area but getting medical equipment approved is a huge undertaking and I don't know what the ramifications of changing anything would be. Say they take your 0day and fix it. Then they have to go through the entire re-certification process again and after however many months or years, NEW patients get the fixed pacemaker. But what about all the old patients?
While I sympathize, the only realistic approach here is to make the consequences for killing someone via a 0day for the "lulz" so drastic that it would certainly legally bleed over into the disclosure. I realize this is the approach we do tend to take here in the US.
Send that to the company and the media. You are best off also showing documentation that you told the offending company multiple times.
Show don't tell.
If this is the case, then wouldn't the same most-people (if made aware of the issue) also agree that it should be illegal for a company's management to ignore life-threatening software flaws in their products after being notified?
I mean illegal as in reckless endangerment or manslaughter, not illegal as in lawsuits and golden parachutes.
I'm sort of glad, in a twisted way, that this has finally happened. Better the light get cast on this now than in a few years once the criminal(/nation-state...) equivalents have had time to go through it themselves.
EDIT: Also, no you shouldn't release a pacemaker 0day. As others have said, expose it without releasing details. Makes for a nice demo.
Personally, I think it's completely unacceptable the way many technologies critical to keeping people alive are so vulnerable. Especially if the vulnerabilities are as widespread as the article suggests (30%!), find a list of 10-20 that vary in importance. List all the products, and list the consequences of each vulnerability.
Then start dropping 0-days one at a time until the industry realizes you are serious. Start with the less severe ones, but if the pacemaker vulnerability hasn't been addressed after a few months of weekly vulnerability releases, don't hold back. The more publicity you can get the more likely a company is to patch vulnerabilities.
If _teenagers_ are capable of finding vulnerabilities that can end lives using a script they downloaded online, then we need to be ready to take drastic action. The industry is in a terrible state; we aren't safe, and decreasingly so as these gaping holes continue to sit there and be discovered.
Don't even get me started about the Nazi analogy...
It could potentially also depend on how easily the vulnerability can be patched—one that can be patched remotely can be dealt with much more rapidly than one that will require surgery to replace the device.
If one assumes that full disclosure will lead to the fixing of the issue, the first class is probably closer to being judged “responsible” than the second.
It is certainly a difficult dilemma. The correct answer can only be known with the benefit of hindsight…
It's a product that has a flaw. Seems like it qualifies for a public recall.
After all, black-market exploits will come, and people will die, whether you disclose the vulnerability or not. At least with disclosure, the innocent have a chance to protect themselves.
You must weigh the lives lost to silence against the lives lost to disclosure. We practice disclosure in all other areas of computer security because we have seen the cost of silence too many times. There is no reason it should be different here.
Disclosure saves lives.
These problems are serious enough that failing fast and hard is not a good way to go about it. This is software meets physical reality. In software land, we've developed radically different approaches to engineering problems because of the incredibly cheap costs. This is one place where we have to borrow from other disciplines that have more experience with safety issues.
But consider a thought experiment: from a security perspective, does it really increase risk? There are many ways to kill someone that are much simpler than figuring out what pacemaker they have, finding a 0day, designing an attack, and implementing it. This vulnerability doesn't necessarily increase the risk that someone will be murdered.
EDIT: Or if you want to attack the pacemaker, use radiation from microwaves or similar devices. At least according to signs posted in many places, they are dangerous to users of pacemakers.
There is no shortage of methods of killing someone or inflicting bodily harm. As far as moral culpability, showing how a 0day exploit can be used to kill a person is akin to saying that you can use arsenic to kill grandma.
[hn thread]: https://news.ycombinator.com/item?id=7684291
The death of Barnaby Jack (who was to present at Defcon) was tragic, but not suspicious.
It would be interesting if someone created a payload that patches the exploited device.
The company can then pull all the inventory that is in and out of the patients and apply fixes or facilitate replacements.
If this could kill people, I'd hope the above ideas would be obvious...but well I know they won't be to everyone.
It's amazing to me that you think I'm not entitled to know there's a problem. Do you also believe I shouldn't be informed if the lock on my door doesn't work? You must hate those consumer watchdog shows!
If someone is capable and willing to kill me with a pacemaker bug, they likely already have the skills to find the bug. Not telling me just withholds the opportunity for me to protect myself (stay indoors? contact my doctor? etc.).
Disclose fully, but anonymously.
How does an ECM (or anti-lock brake controller) "slam on the brakes"?