On the very same day this information came out, 'Viceroy Research Group' managed to release a 33-page 'analysis' of these results. With illustrations.
>We believe AMD is worth $0.00 and will have no choice but to file for Chapter 11 (Bankruptcy) in order to effectively deal with the repercussions of recent discoveries.
Viceroy Research lists no employees or contact address, and it appears they are not a crack team of hardworking, incisive business analysts but two Australian teenagers and a former UK child social worker who was struck off in 2014 for misconduct.
They have previous form in producing or plugging short-call stories (quite effectively), and were latterly investigated by South African media for similar shady business.
It took very little internet sleuthing to find this stuff out. None of the tech press bothered to do so.
Disclaimer: I have no position in AMD.
Edit: link to Viceroy https://viceroyresearch.org/
And that's the creation date, not even when they were published.
Edit: And it gets better! If you check the HTTP headers when requesting the whitepaper from their servers, the Last-Modified header shows the file was placed there at 13:22 GMT, just 1 hour before Viceroy Research Group created their analysis - and probably ages before the actual news broke.
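For anyone who wants to reproduce that check, here's a minimal sketch in Python using only the standard library (the URL is a placeholder, not the actual whitepaper link):

    # Minimal sketch: send a HEAD request and print the Last-Modified header.
    # The URL below is a placeholder - substitute the real whitepaper link.
    import urllib.request

    req = urllib.request.Request("https://example.com/whitepaper.pdf", method="HEAD")
    with urllib.request.urlopen(req) as resp:
        # Most static-file hosts report when the file was uploaded or last changed.
        print(resp.headers.get("Last-Modified"))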
That said, unless the whole "research" is fake, I wonder if we could be seeing more such tactics in the future against tech companies, and whether or not that would give them an immense incentive to care about security - or risk getting ruined in the stock market.
Honestly, such a huge incentive may actually be needed to get most companies to care about security. The money equation needs to make sense to them. Right now most think that investing the absolute minimum in security for compliance reasons is already too much money wasted. If this were to become common, I think maximizing security would actually start looking quite profitable to them.
I mean, this research is already saying there are some backdoors in AMD's chips. I imagine in the future, companies would be way more careful about allowing backdoors in their products, whether intentionally or by mistake, if they knew they risked getting their stock crushed.
So yeah, I just like to play with this idea a little. So far this revelation doesn't seem to have had the effect "desired" by the backers of the research, but we'll see. I just want to know whether or not the research is real, so I'll wait for AMD's confirmation. I assume AMD wouldn't try to lie to us about it, because there are now probably at least a dozen security teams trying to pick AMD's chips apart, so the flaws would be found soon enough, if real.
Not a sure-thing conviction, but certainly a dangerous business plan.
> If you think a company is bad, or fraudulent, you can sell its stock short and try to profit when everyone discovers its problems and the stock drops. If you want to hurry that process along, you can always noisily publish research reports explaining why the company is bad or fraudulent. If your research reports convince other investors of your thesis, then the stock will drop, and you will make money. There are more longs than shorts, and more dicey public companies than noisy short hedge funds, and so people who use this strategy tend not to be especially popular. In particular people often go around accusing them of fraud, or market manipulation. "Wait," people ask, "how is it not manipulation to short a stock and then publicly announce that the stock is bad?" I am always confused by this complaint. Just flip it around: It's not manipulation, surely, to own a stock and then publicly announce that the stock is good.
(Followed by further justification of this position).
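To make the mechanics in that quote concrete, here's a toy sketch of a short seller's payoff - all numbers are made up for illustration, and borrow fees and margin costs are ignored:

    # Toy short-sale P&L: borrow shares, sell now, buy back later, return them.
    # Illustrative numbers only; ignores borrow fees and margin requirements.
    def short_pnl(shares: int, sell_price: float, buyback_price: float) -> float:
        return shares * (sell_price - buyback_price)

    print(short_pnl(1_000, 12.00, 9.50))   # report convinces the market: +2500.0
    print(short_pnl(1_000, 12.00, 14.00))  # market shrugs, stock rises: -2000.0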
On the other hand, it seems that uncovering new, true information, and then taking a short position based on it, is not illegal.
A lot of the comments in this thread were assuming there was some crime in merely reporting the bad news and trading on it, regardless of whether or not the news was true.
If the claims you're making are true, then it's not deceiving or defrauding, even if the way the information was published was immoral under standard professional ethics.
If these vulnerabilities were misrepresented by the short sellers that funded it, then I suspect that would bring them into stock manipulation territory.
I can think of a number of approaches, but nothing I could categorise as trivial.
Extremely fishy. 1-day notice? Such aggressive wording without even the chance for AMD to address the concerns?
If I were the tinfoil hat type, I'd guess that Intel is trying to spread FUD, but maybe it's just security researchers trying to generate a bit of buzz for their company at the expense of AMD.
>Yaron co-founded CTS-Labs in 2017, and previously served as an intelligence analyst in the Israeli Intelligence Corps Unit 8200. He is also the founder and Managing Director of NineWells Capital, a hedge fund that invests in public equities internationally. He holds a B.A. and M.A. from Yale University.
Could something like this be considered inside information? Or is it legal to actively manipulate stock prices to one's benefit in this way?
No, illegal insider trading refers to trading on inside information when you have a confidentiality agreement or a fiduciary duty. Information asymmetry is insufficient (or else it would be virtually impossible to profitably trade at all).
> Or is it legal to actively manipulate stock prices to ones benefit in this way?
The way you're presenting this is a false dichotomy. It's not "manipulating" stock prices except insofar as people broadcast news all the time which alters stock prices. Strictly speaking, it's not market manipulation if it's true. If it's false, it can be, which is why you really try not to do it unless it's true.
As far as I can see, this is only an exploit of secure boot if you already have ring-0-level access. Making a whole webpage with lots of graphics and whatnot, sending press releases all over, and in general presenting it as a security flaw on the level of Meltdown seems... false?
Probably court-level material. In any case, it seems to have backfired, as the stock is up.
Maybe not technically a short squeeze (https://en.m.wikipedia.org/wiki/Short_squeeze) but related?
It feels slimy and gross.
If the news pushing a stock price is so misleading that it's categorically different from the truth, then I could see a case for market manipulation being brought against them. But I doubt that will happen, because unfortunately people have broad latitude to portray vulnerabilities however they'd like as long as they're convincingly authentic.
I guess that's my question. If you take a true statement and put it out with connotations that it's actually a terrible thing, worse than it is, just being "true" doesn't matter. Stupid analogy: say you go to a bakery and buy bread, and it goes stale within two hours or so, by the time you get home. Then you write a negative Yelp review saying this bakery is terrible at baking and sells subpar bread. What you don't know is that the bakery doesn't bake in the preservatives that supermarket bread has. And it certainly isn't as if the bakery is selling spoiled/poisoned bread, or rocks painted to look like loaves.
Seems a bit shady in any case, if one were to do this the same way companies pay researchers to publicly make claims that benefit the company - like the process leading up to the banning of lead in petrol, etc.
It’s also not market manipulation to publish factual information or opinions. Only knowingly publishing false information would qualify.
For example, a short seller last year revealed (through extensive research) that Valeant Pharmaceuticals was stuffing its channels and faking its finances. He placed a huge short sell and went public with the damaging info - tanking the stock from $270 to $12 - and made a ton of profit off of it: https://www.nytimes.com/2017/06/08/magazine/the-bounty-hunte....
Without this incentive, why would anyone bother to reveal damaging info? You're placing yourself as a target with no reward. The payment is the natural balance of the market.
So yes, this research firm is connected with a hedge fund, and they have a very vested interest. But that doesn't make their claim untrue.
1. Flash the BIOS
2. Have admin access
Holy shit, this calls for a full-fledged panic!
I am very disappointed anandtech.com even bothered to give this smear campaign the time of day. If someone can flash your BIOS or has admin access then you already have way bigger problems.
Anandtech is reporting on the situation more than the flaws. That does require covering what the flaws are though. Not covering it at all isn't exactly performing good journalism either.
There are, as I see it, two rational, coherent ways to be outraged about this story:
1. The vulnerabilities are fabricated and the report is fraudulent, in which case, by all means, slag the researchers.
2. The vulnerabilities are real, in which case AMD is an $11 billion company that got outmaneuvered by what appears to be 4 dudes in a basement.
I do not need to be a security researcher to understand that they, like everyone else, have an obligation to the body politic to not be a dick (as in all things!). There are actors who may be aware of this attack already--but, as I mentioned elsethread, wider knowledge of attacks like this has a much higher chance of splashing back on end users, who literally don't know any better, than it does on AMD. I mean, I couldn't give less of a shit about how AMD feels--they'll be fine regardless--but there are people downrange of this, not just some company.
This is shoot-the-hostages stuff, and I believe that you are better than to be OK with that.
This isn't "shoot the hostages". The researchers didn't manufacture the vulnerabilities; AMD did. If 4 dudes in a basement can find exploitable driver vulnerabilities, so can 10 researchers none of us will ever have heard of, working in a nondescript office somewhere in Bulgaria. The only moral difference is that these 4 dudes told us about what they found --- something they had no actual obligation to do.
Again: it seems really likely that these vulnerabilities have been hyped way out of proportion to their real impact. I think it's reasonable to be irritated by that (again, though: this isn't a first). But other than that, I don't understand how people arrive at the conclusion that independent security researchers owe strangers the results of their work.
They've widely disseminated an attack strategy to people who didn't have it. Nobody except AMD can fix the problem, regardless of the good intentions of other actors--on the other hand, many bad actors can use that information. That's as shoot-the-hostages as it gets.
Security researchers owe "strangers" (which is a really weird term for "society at large" that I don't think you, specifically, would be using with such connotations outside of a security context where you'd already made a decision) the same courtesy they owe everyone else: to not endanger people unnecessarily. I agree with you that this is a relatively minor vulnerability, I'm not hyping it or anything--but it's still a vulnerability, it is still more widely known now, and there is a bigger pool of bad actors than there was last week able to use it against people, irrespective of AMD's stock price.
There's certainly a gray area, if a vendor hasn't acted to fix something you know they know about. I'm not talking about that. But 24 hours and briefing the media before letting AMD know, as it very much seems like they did, is well outside of what I could consider any reasonable gray area.
If you care about end users, and you should because they are your fellow people, you don't publicize how bad actors can hurt them. You just don't. It's just...minimal decency, to care about other people. I can't see it any other way.
The premise of your argument is that without vendor cooperation, end-users are helpless to mitigate the impact of security flaws.
No, they aren't. Not only are they not helpless, but many of them are in fact ethically obligated to mitigate exposures with or without the assistance of their vendors. Almost every end user has at least one last-resort mitigation for any vulnerability: the power switch.
Most of the time, most users have better non-patching mitigations than that. These vulnerabilities are all post-compromise privilege escalation flaws. Their exploitation is situational and most users can do things to eliminate the situation that enables their exploit.
You might not like the fact that end-users have to make hard, expensive choices about how to mitigate flaws. But if you think about it for just a second, you'll see that the idea that patches were saving them from this choice was fallacious. There is no reason to believe these 4 dudes were the only ones in the world capable of finding these flaws (the reality is that if they're the only ones who know about them, it's because the kinds of flaws they found simply aren't important enough to demand focused attention from others). All restricted disclosure does is prevent end users from making the choice for themselves.
I believe that as a general rule, we're better off when we have the most information available to us about vulnerabilities. Personally, I'd probably stop short of publishing exploit code. But other researchers that most of us respect a great deal in the abstract do not have that particular scruple, and some --- like the original Metasploit project --- made it a point to publish exploit code immediately, patch or no patch, to arm operators with information about their exposure.
This isn't an idle opinion. If there were working Usenet search in 2018, you could find me making approximately the same argument back in the 1990s, when I worked as a researcher at SNI, the world's first commercial vulnerability research lab.
I would say they are all invasive evil-maid threat vectors. Each one requires either physical access to the hardware or (as you stated) already-established root privileges. We all know that if someone has physical access to your hardware, it's essentially game over.
However: one of the vulnerabilities supposedly allows subverting UEFI Secure Boot. If that's true and it allows booting arbitrary media, then the others are equally feasible, because an attacker can boot into a root shell of their choosing.
The timing of this disclosure reeks of malice, though. Giving a 24h advance warning basically allows the outfit to claim they disclosed the vulnerabilities to the manufacturer before going public. Technically true. Just highly misleading and dishonest.
I personally have no beef with full disclosure, and have advocated it as a viable mechanism since the mid-1990s. I also happen to think that responsible disclosure is a good approach, but it definitely needs the threat of FD as a stick, because otherwise vendors would not have any real incentive to work on addressing security bugs. Name-and-shame does work.
Let's get back to the AMD flaws. Giving a really short window? Basically just enough to have an initial PR response ready? Have the decency to go full disclosure, or give a full month. AMD won't be fixing the bugs before the news breaks in either case. Just don't claim this is anything but a maliciously crafted exercise with ulterior motives.
You seem like a living argument for ethical standards being imposed on your industry, by law if needed.
You're arguing that the force of law should prevent you from learning inconvenient things about the software you use.
You are harmed by them discovering a vulnerability and telling the world about it.
And if they discover a vulnerability and tell both you and the rest of the world, the harm may easily outweigh the benefit.
Suppose I go wandering around the city where you live, checking for unlocked house doors. I find that you've left your front door unlocked and gone on holiday. I then wander the streets shouting "Thomas's house is unlocked and no one's at home!". I also phone you up to let you know your house is unlocked.
It was your fault, not mine, that the house was unlocked and no one at home to deter burglars. In principle, anyone else could have come along and burgled your house, if they'd found it before I did. None the less, I think that in this scenario I have done you wrong.
The argument against your position that people are trying to get across to you is not that. It is that publication of a vulnerability, without giving the vendor a heads-up and time to prepare a solution, greatly increases the risk that users will be harmed by attackers exploiting the public knowledge. Often a substantial number of users are not going to mitigate or resolve the problem until their vendor puts out an official solution.
It's certainly reasonable to argue about which kind of disclosure is the best way to achieve minimal harm, but my opinion is that it's unethical to disclose without considering which method of disclosure will do the least harm - or, worse, to just not care and go for the "biggest splash", as these researchers seem to have done.
What does that even mean? What do you think "ethics" means? This is a nonsensical statement.
The consideration of what people in certain situations should or should not do IS ethics.
Even if someone would say (for some reason) "but researchers should be able to do their work without consideration", that is making an ethical statement.
I understand why you would have a problem mapping this back to ethics, because if you'd formulate it as such, it would sound kind of bad: Researchers have no ethical responsibilities to the public.
You can't choose to not let decisions be guided by ethics, that's like claiming you choose to find your way without navigating. It makes no sense.
This being a controversial topic straight at the intersection of technology, the way it changed and affected society, the public good and our dependence on technology, I really don't think that "I haven't changed my mind about this in 28 years" supports your argument ...
And honestly I would say that whether I agree or not.
I wasn't working in security but I definitely moved my opinion on the matter. In the (late) 90s I was mostly for full public disclosure arguing the same "we're better off when we have the most information available to us". But today I'm leaning way more towards "responsible disclosure is good" (as you can tell I'm also not 100% black-and-white on the matter like you said you are).
Maybe it's because I was younger then and had more of a reckless mentality and an innocent belief that people will make the right choices given enough information.
Maybe it's because in the past 28 years technology has changed our society to such an extent that impact of security vulnerabilities is rather incomparable to the impact they had back then.
Maybe it's because I definitely don't believe that you can defend this opinion with the very same arguments that were used back then without even addressing the spread of information technology and the drastic way they altered society in the past 28 years.
Maybe it's because I now realise that I myself am not always better off with more information if I can't act on it, and therefore it's not reasonable to assume that as a general rule. Which is very much something I had yet to learn 28 years ago; I had to swallow some pride. I wish everybody was as clever as I was back then ...
I know everyone in my family is ignorant of this “disclosed” security flaw and is powerless to mitigate the vulnerabilities disclosed on their own. Even if they did know to “turn off their computer” as someone said, are they supposed to wait until someone calls them to tell them a patch is ready?
Disclosing a vulnerability for profit at the expense of everyone else is a shitty thing to do. Would giving AMD a few days to fix it have hurt as many people as giving them one day?
Continued education to help end users get to the point where they can make meaningful and educated decisions is great, and should be pursued, and I do it where I can (though most of the time there's just a shrug and a "whatever"). But, barring that, somebody's gotta make choices on their behalf, and there's a Jerry Garcia quote for this one, you know? With great power comes great responsibility, and we gave ourselves that power. And, outside of a security context, this is why I unflinchingly come down on people who work for shit companies that hurt people, why I'd never hire someone who worked for, say, a toolbar vendor in the 90's/00's and why I have fired clients before when I discovered they were doing shitty things with data gleaned from people who trust them: because we have ethical responsibilities to the people downstream of us who are ill-equipped to make meaningful, educated decisions. I can't compel anyone to do as I do--but I can say that one should, because it's decent.
I can't agree that the power switch is a reasonable mitigation in 2018. In the nineties, sure, but too much of life revolves around this garbage we invented and keep mostly creaking along. (Should it? Probably not. Does it? Yeah.) We are on a ratchet, we can't go back, and kicking the decision down to people who literally-literally lack the tools to make a wise decision while painting a target on them for bad actors who can take advantage of them is profoundly disturbing to me.
This particular vulnerability is a post-compromise privilege escalation flaw, yes. But it strikes me that the conversation must be bigger than that, because the same arguments are used for both. This? Low stakes. Heartbleed? Incalculably high stakes. But the same argument could/would (if it were found by shitheads rather than people with a certain amount of decency to them) be used for the latter instead of the former, and that's what makes me itch.
(And to be clear, irrespective of this conversation, you know I am a big fan.)
I don't see how you get there from here.
But I think that should be done after mitigations are in place to protect end users, or if the vendor is not taking good-faith steps to mitigate the problem.
And I am not saying one should be "restrained from speaking" at all. I am saying that choosing to do so makes one an asshole, and that decent people should strive to not be assholes.
You seem to have several deeply misguided premises.
1. We don't know that AMD knowingly shipped these chips despite their being vulnerable. Bugs happen.
2. Even if this were the case, an individual can, and ought to, show decency and empathy towards others.
3. This last comment of yours is a straw man, and I doubt you are incapable of seeing that. Your parent's argument was much more nuanced and elaborate than your rebuttal.
So if a hospital runs life support on a vulnerable chip, they should just hit the power switch until it's fixed.
Or what about a computer controlling a nuclear power plant? An airplane? Spacecraft or Satellite?
Vulnerabilities don't restrict themselves to equipment that is non-essential for people's survival, or that wouldn't cost millions to replace in consequence of a hack or shutdown (please try to revive a satellite after a full shutdown; I will be awaiting your report on how you'll align the antenna).
I don't think this is an appropriate way to argue. Sounds like if he disagrees with you, he is somehow below standard.
> It's just...minimal decency, to care about other people.
Alerting folks to the danger that they face is one way to do so. Responsible Disclosure is caring about the vendor, whereas full disclosure gives other people the chance to take action on their own to remove themselves from harm.
As another note, why not argue for Responsible Development? This is where the outcry should be. Flaws in products come about because they are shipped before they are finished.
That is true, but you missed the other side of the argument. Coordinated disclosure is also preferable for a portion of users/customers. A significant share of them have neither the understanding nor the incentive to mitigate on their own. So the question the discoverer of a bug faces is: 'how much of a head start should I give the vendor, and the users who depend on the vendor, before I make this public?' This has no universal answer; it may depend on how long the bug has been out there and what kind of users may be harmed. But it is easy to see that a little head start, in terms of weeks, is more reasonable than a head start of zero, especially for bugs that have been out there for years.
> why not argue for Responsible Development? This is where the outcry should be. Flaws in products come about because they are shipped before they are finished.
Flaws are not always due to cutting corners. Some bugs in computers are very unintuitive, and it can be years before they manifest. More responsible development seems like a good idea, but again, this ignores the other part of the problem: a major group of users do not understand the intricacies of development and are not willing to buy a more 'responsible' product if it is 5 years behind the newest trend and costs 5x as much.
I answered your question. But you didn't answer my question.
What about the flaws that aren't unintuitive? What about the bog standard integer overflows vendors routinely leave in code because they won't pay what it costs to ensure they don't ship them?
By all means, vendors should be taken to task, and be beaten up even more when a bug was easily avoidable. But a bug's stupidity is completely unrelated to how a user might be harmed by an "irresponsible" disclosure. Giving the vendor their just desserts is secondary to that.
I disagree. This does not account for the fact that malicious actors are likely to exploit these before the vendor fixes them, on a schedule the vendor would prefer to dictate. And not all users are incapable of making alternative judgments about the use of vulnerable technology. Users include my Mom, hackers at small companies, and giant corporations capable of turning off SMBv1 overnight.
The harm to users comes from vulnerable software that the vendors put there in the first place.
So are you talking about AMD being dicks by releasing buggy chips, or the researchers somehow being dicks for finding out?
Related question: if a "food security researcher" discovered a vendor was selling contaminated produce - would it be reasonable for them to give the vendor 90 days notice before telling the public?
While I think it's reasonable and appropriate professional practice for _some people/teams_ to go down the "coordinated disclosure" path (I think the world is a better place for having Tavis Ormandy disclose the way he chooses to), it does without doubt benefit the company whose products are flawed more than the researcher or the public. Anybody who knows they work at a firm that's going to be dismissively described, as AMD did here with "This company was previously unknown to AMD", is quite likely correct to publish-and-be-damned, because you can bet there's a non-zero chance that AMD's response to non-public disclosure would include either stonewalling and stringing the problem out as long as possible, or lawyering up and threatening to sue the "previously unknown to AMD" company into oblivion.
If you don't want public disclosure of security flaws about your products, either don't make flawed products or don't ship them to the public. Especially if some of the key selling features of said product include bullet points like "AMD Secure OS".
This example is absolutely farcical. It's not even close to the same thing and you know it. A security flaw is not equivalent to poisoned food - it still requires outside action to be exploited.
Everybody releasing chips releases buggy chips. It's the current reality of both hardware and software. Unless they do it maliciously, they're not dicks.
Exceptions: issue was known but got ignored due to release schedule, or security was never mentioned in the project and at no level was there any security consideration. But that's for specific management issues, not engineers or the vendor in general.
Everyone makes mistakes; it's more about how those mistakes are handled and if a user's control over their computer is respected.
The goal of responsible disclosure windows has nothing to do with saving face for the company. The point is that it gives the company time to come out with a fix so that their customers aren't left with massive holes in the security of their systems.
Hypothetically speaking, if you are researching vulnerabilities solely for money (because you can sell them to third parties, or your side hedge fund business can profit from disclosures in the stock market), then shame on you, because you are doing society a disservice and gaining on everyone's losses. To me, you are as evil as the hackers who exploit them.
Vendors who wish to discourage that behavior could offer comparably-large bug bounties instead. And, of course, make their products more secure in the first place.
They didn't blow full technical details on this exploit after 24 hours; they went public with a summary --- one so high-level that many people are even doubting the flaws exist. That's not exactly dumping a zero-day on the internet either.
There's a whole lot of shooting-the-messenger going on with this topic. Making plays against the stock is scummy and possibly illegal, but that doesn't make the exploits here any less real (assuming they are). These are actually quite serious breaks: potentially, VMs can jump the sandbox straight into SMM mode and the PSP, so it is much more severe than just "root password lets you do root things".
There is a long and storied history of showing the disadvantages of your competitor's products. Edison went on a campaign against Westinghouse's AC electricity, culminating in him electrocuting an elephant to death to demonstrate how dangerous it is.
Right now we need more spotlights on computer security than ever, and as long as it gets bugs patched (hardware, software, or firmware) I don't really care who's doing it or what their short-run motivations are. If AMD won't secure their code appropriately and Intel wants to call them out, fine. If Intel is leaking timings through sidechannels and AMD wants to call them out on it, fine.
And if we want to throw stones here, it was AMD who blew the embargo on Meltdown a week early because they wanted to force a response from Intel at CES... different in degree, not really in kind.
Do you realize that the general public are usually the ultimate victims, the ones impacted most by these vulnerabilities?
Intel and AMD in particular are corporations worth tens of billions, near-monopolies in their fields; if their CPUs have unpatched zero-days, with sample code and exploitation techniques out in the wild, what else are you gonna use in your desktop computers?
We've seen something similar happen to Microsoft after the Shadow Brokers disclosure. It's gonna be worse for hardware products, as it's virtually impossible to retroactively fix silicon.
It turns out it's exactly the "release a general idea to the public to light a fire under the vendor's ass, only release exact technical details to the people who need to know" approach that you might expect. They didn't dump a zero-day on the public.
"Responsible" to whom? Terms like these indicate what side one takes, such as how one expands the term "DRM": digital rights management means taking the 1%/elite side favored by the publisher, the few in power. 'Digital restrictions management' highlights what's happening from the user's side, the 99%, the side of the many. Similarly with the harm to the users and the desire for freedom in the term "jailbreaking".
So, since we recognize the reporters owe AMD nothing, to whom are they "responsible"? Or what are they responsible for?
This phrase strikes me as useless except to try to foist a responsibility on people that they don't actually have and getting the relatively powerless to serve the interests of power -- users who can't inspect, edit, or share edited CPU microcode are somehow not acting responsibly if they don't give proprietors sufficient notice.
Where is the "responsible disclosure" for Intel when they refuse to let users fully control the signing keys used in the software that sees every network packet before the rest of the computer (for inbound network traffic) and before a packet leaves the computer (for outbound traffic)? The one-sidedness of it all sticks out like a sore thumb.
In the white paper, many attacks are hypothetical and many phrases are vague and slippery, suggesting the "researchers" barely achieved execution of something, not real payloads.
I hope AMD invests the little money needed to fund this sort of PR campaign, er, research initiative, against Intel. The net result would be a greater awareness of the perils of "sponsored" science and of the poor state of PC security.
Dan Guido and Trail of Bits got to read the actual report, and vouched for the vulnerabilities as real. The fact that there are vulnerabilities in signed drivers is a bad thing: it means that AMD shipped cryptographically signed versions of vulnerabilities. Arrigo's twitter thread implied that the use of signed code somehow mitigated the vulnerabilities, but the opposite is true.
Suggesting this is "just hypothetical" because "nobody is going to get physical access or code execution in a signed driver" is pretty shortsighted in my opinion...
3. The vulnerabilities are real, but their impact is being overstated because behind the security researchers is a financial firm hoping to make a buck on stock trades.
#3 would seem to me to be a bad development for your niche, if it became a popular business model.
(I have no horse in this race.)
If it was, it’d seem that this research was in support of a financial play similar to how Muddy Waters shorted St. Jude Medical on the basis of insecure medical devices. That would appear to be a legitimate strategy, but if the market didn’t punish Intel for their processor vulnerabilities it seems likely they’d react similarly here and the research would fail to move the stock price in any significant way.
3. The vulnerabilities are real, and something smells real fishy about the way they were released, including what appears to be 4 dudes in a basement.
Except that's not necessarily something to get "outraged" about, just something to keep an eye on while this story develops.
The only one I see shouting "this is an outrage!" appears to .. be made of .. straw?
I’ll add 3. It’s not all about the researchers and AMD, but about the people who use AMD chips and deserve a modicum of protection and consideration. Unless there were exploits in the wild, the security of users seems not to have entered into this.
Interestingly, you'll note that the researchers claim public interest as their reason for non-standard practices, but then later it is revealed you need admin privileges to exploit them. The rhetoric the researchers use is inflammatory and staged in a media savvy way like a PR campaign.
This is a totally evidence free assertion and I'm not an infosec person (and am therefore happy to be set straight by experts) but I'll be happy to crack open the popcorn if something interesting is revealed a few years down the line.
I also wonder, what is the purpose of such white hat operations if vulnerabilities are disclosed publicly without anywhere near adequate time for a fix? Isn't SOP to give more time before going public?
Are the reporting parties under any obligation to give AMD notice?
Behaving according to AMD's wishes is not an obligation. Businesses will be the first to tell you that agreements and laws form obligations, not what someone perceives as a nice thing to do.
If not, then you're reacting to a distraction, a detail that doesn't matter: how the corporate-friendly tech press is trying to shift blame away from the party that either sold CPUs with bugs in them (mistakes happen, and this is unfortunate) or distributed nonfree (proprietary, user-subjugating) software which also happens to contain insecurities (a malicious and unjust way to distribute software).
"you are advised that we may have, either directly or indirectly, an economic interest in the performance of the securities of the companies whose products are the subject of our reports"
This is the entire point of short selling, and the SEC encourages this type of activism. It allows people who can provide expert knowledge to profit off a trade if they can reveal damaging and legitimate information about a company.
For example, a short seller last year revealed (through extensive research) that Valeant Pharmaceuticals was stuffing its channels and faking its finances. He placed a huge short sell and went public with the damaging info - tanking the stock from $270 to $12 - and made a ton of profit off of it: https://www.nytimes.com/2017/06/08/magazine/the-bounty-hunte...
That said, I don't mind that these "research" organizations exist. Only bothers me when they put the general public at risk (or attempt to) for their own gain.
Short sellers want the opposite. So they both present their best cases and let the public decide, much like how lawyers will defend their own clients to the last breath regardless of the amount of evidence against them
So, AMD has vastly more incentive to be accurate than short sellers.
AMD's incentive, like any corporation's, is to maximise shareholder value. Same as any tiny little security research firm. If a research firm can maximise its profit by discovering vulnerabilities and shorting the stock before disclosing them, is that any ethically worse than a chip company rushing out flawed hardware with big flashy marketing bullet points claiming how secure it is?
(I'm not saying short-selling chip vendor stocks on the back of vulnerabilities is a way I'd choose to make a living, but surveillance capitalism doesn't seem an "ethically better" industry to work in either...)
As to ethics, that's mostly irrelevant to this discussion. Both sides could behave ethically; I am simply pointing out which side has the larger incentive to exaggerate. After all, the stock could drop and a short seller could still lose money. They need the stock to drop a lot, even over a minor issue.
AMD isn't going to crash and burn over these flaws any more than Intel (at a 5-year high) did.
Although I enjoy reading grandparent's counterpoint
It's sad to see people arguing for a return to those norms, especially since the rejection of them correlates with a renaissance in our understanding how to secure software.
I'd say that the intent makes this qualitatively different to what I'd consider legitimate disclosure.
It's not like their marketing copy makes accurate claims like:
"We're reasonably sure our Firmware Trusted Platform Module is trustworthy, but we ran out of time to pentest it properly before we shipped it."
"Ryzen features Probably-Secure Encrypted Virtualization! Our interns couldn't break it in a afternoon of trying! The data looks random enough to us..."
How much does "the intent" of their marketing copy and claims come into play?
Where do you see anyone arguing for that? Or is it just a strawman? What I see is not people arguing against disclosure but people arguing for disclosure with an embargo longer than a day. You're going to have a hard time proving that one day is a norm, or that it correlates with a renaissance in securing software. Your response looks much more like circling the wagons when a member of your tribe is criticized.
It's not like AMD sets their chip prices based on "ethics" or "duty to the public". As "the public", I'd prefer a Ryzen 1900X to sell for $150 rather than $500 - it's just a bunch of sand, after all (plus some intellectual effort). I don't think AMD gets to choose their own pricing model but then complain about how security companies price/sell their intellectual work...
A good way for companies to prevent this is to have a generous bug bounty program. Money is still transferred from the shareholders to the researchers, but then the company can impose conditions like delaying public disclosure for a reasonable time to prepare a fix.
Which they should be if the alternative is a much larger loss to the company's share value. The shareholders come out ahead to pay five million on a bug bounty if the alternative is to lose a billion dollars in market cap.
Do you think there's _any_ chance AMD would have offered these guys money in the sort of magnitude they stand to gain short selling AMD?
I'm pretty sure if they'd asked AMD would have responded with a blackmail lawsuit instantly.
Direct quote from: https://viceroyresearch.files.wordpress.com/2018/03/amd-the-...
These guys are slimy as hell, this is disgusting.
> He [Yaron, CFO] is also the founder and Managing Director of NineWells Capital, a hedge fund that invests in public equities internationally.
I wonder how linked the companies are - is this basically a vulnerability research company as a research arm of a hedge fund?
They made a rookie mistake though - AMD is plagued by day-traders and algorithms who couldn't give a damn about the fundamentals.
Boy the future of capital markets is looking grim.
Seriously. AMD stock is trading up 3+% at the time of my comment, and it's climbed since the disclosures this morning.
Something tells me this backfired.
Discl: I've been long AMD for a long frickin' time.
The security angle is a fascinating and concerning new development, however. That said it may encourage more secure practices (as opposed to theater) through the hardware/software lifecycle in response to serious fundamental design problems.
It will also serve to increase the premium on 0days...
I strongly doubt that. I've seen incredibly serious vulnerabilities I've reported firsthand have little to no impact on a company's valuation when publicized.
Now Shopify is closer to $150... so their plan worked.
If it's false information, isn't that classic stock manipulation? I thought for it to be legal to make money on the stock it had to be both accurate and publicly available (if potentially hard to put together)?
Watch the video and see for yourself: http://citronresearch.com/citron-exposes-the-dark-side-of-sh...
That video by itself tanked the stock for many, many weeks, until they finally reported quarterly results and it started climbing again.
I'm glad the CEO didn't feed the trolls by acknowledging this report in any depth.
Also shows how irrational the stock market is in the short term.
Their Shopify video, for example, is not the typical "research report" with lots of specifics, but more of a personal opinion with rather broad accusations.
They could have exercised puts if it went down (which it did in the morning), or bought stock/calls both before the site's release and after any dip, if they knew it wouldn't be a concern or would be dispelled by AMD.
Unless this is truly a flaw, in which case they can still buy more puts and just wait for AMD's official response.
Also, as nothing has been verified about the report (from AMD), there is still the potential for this to move either way.
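(For reference, a toy sketch of what those positions pay at expiry, with made-up strike and spot prices and the premium paid for the contract ignored:)

    # Toy option payoffs at expiry; strikes and spots are hypothetical,
    # and the premium paid for the contract is ignored.
    def put_payoff(strike: float, spot: float) -> float:
        return max(strike - spot, 0.0)

    def call_payoff(strike: float, spot: float) -> float:
        return max(spot - strike, 0.0)

    print(put_payoff(12.0, 10.50))  # 1.5 per share if the stock falls
    print(put_payoff(12.0, 13.00))  # 0.0 if it rallies instead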
Great username BTW
This is legit, and they haven't published anything that can be used maliciously.
My guess is that this has to be financed in some part by a group of short-sellers.
What evidence do you have of that other than 'too well presented'? It sounds like a conspiracy theory, not a guess.
That said, it's pretty striking to me how aggressive this disclosure is. It may be an attempt to narrow the window and increase the profitability of a short sell.
It's not uncommon for short sellers to take a position first before releasing a report like this to drive the stock lower. Of course, there are legitimate groups that have unearthed real issues and corporate misconduct in the past, but there are also questionable groups that will release reports with little to no substance. This case certainly does look dubious, but I'd like to see an assessment by a reputable security expert.
this is not in the same league but i recall AMD/INTC also traded up on the spectre/meltdown debate. a lot of insecure chips ironically leads to a lot of demand for new secure-er chips.
Yeah, right, this is definitely not being used to affect the share price!
A black hat hacker (or black-hat hacker) is a hacker who "violates computer security for little reason beyond maliciousness or for personal gain"
The personal gain part certainly fits with short selling the stock.
If we can call them "hackers" just because they ostensibly compromised their own hardware or software as a proof of concept for the vulnerability research, does that mean that all of Google's Project Zero consists of hackers and black hats because they get paid (personal gain) by Google to find security vulnerabilities?
Right, and neither did these researchers.
In point of fact, no, the difference really isn't all that stark. It's a difference of degree, not category. You apparently have a problem with disclosing vulnerabilities without providing advanced notice to the vendor, and you consider it especially distasteful to do so if you're financially benefitting from that. But all of that still comprises vulnerability disclosure, which is categorically different from actively using a vulnerability to compromise users as part of a criminal enterprise.
We can go back and forth like this all day, because every time someone bends the definition of black hat to fit something they disagree with, I can form a counterpoint which is technically true but which no one is willing to call black hat behavior, like Google Project Zero. On the other hand, if we use the definition of black hats as criminals engaging in online fraud, augmented by security vulnerabilities, then of course Google Project Zero doesn't qualify. You're going to have a very difficult time broadening the scope of this terminology to suit your definition without accidentally including groups you don't want to be in the same bucket.
And that's precisely my point. If you broaden terms too much, like "black hat" to "stuff with computers in bad faith", we can just weasel in whatever satisfies the definition or agrees with our personal viewpoint. Black hat criminals do not engage in debatable behavior, because it's strictly illegal and directly profits at the expense of other people. At best, all you can do is formulate an abstract argument about people being harmed by rapid disclosure, but that actually comes down to a debate of disclosure guidelines, not a debate of activist investing.
On the other hand I agree with responsible disclosure. And I think that should be made mandatory by law.
And finally, I also agree with some fines for companies allowing these holes to exist for so long. Especially those discoverable by 4 (more or less) random guys.
This is not a black-and-white situation, so don't look for easy conclusions.
These guys are not professional at all.
In any case, if we go by what you're saying, then anyone can define "black hat" to mean whatever they want, which means it's a meaningless and unproductive concept to throw around in conversation.
Your assertion is in a catch-22 here. Words have meaning without requiring an independent body to rigorously define them. The established definition of a black hat is someone who compromises other people using security failures for their own gain. If instead we choose to say that the term has no established definition, then the entire point is moot, because calling someone a "black hat" no longer means anything.
Speaking as someone who 1) works in the security industry, 2) has managed corporate disclosure programs as an internal security engineer, 3) has run a security consulting firm working with many companies, and 4) has reported security vulnerabilities in disclosure programs; no, that's not the reasonably accepted definition. I can't think of any colleague I've ever worked with off the top of my head, nor any widely read security-focused periodical (like Krebs), who would use the term "black hat" for such a generalized disagreement of ethics.
This criticism of the industry might hold more weight if you actually evidenced a willingness to use terminology according to its accepted usage, not as a tool to advance your ethical opinions.
> And this is not a generalized disagreement of ethics.
It actually is, because I strictly disagree that either of 1) trading on bad news, like security vulnerabilities, or 2) disclosing vulnerabilities without notifying the vendor are unethical. You're free to disagree! Your opinion is just as valid as mine; the thing is, we don't define words based on opinions, because then we'd never get anywhere, and we could label people we don't like whatever term we know other people don't like, even if we don't share the same definition of the term. By calling people who do either of #1 or #2 black hats, you're exercising rhetoric that puts them in with actual criminals, doing actual illegal things just because they are doing something you disagree with.
> Bad faith has a specific meaning and you are unreasonably stretching it.
Okay. I guess I'm free to also call scientists working on whatever thing I disagree with pseudoscientists then, just because I find their work ethically unsettling. Better yet, I could call them criminals.
To save the click: "1. [common among security specialists] A cracker, someone bent on breaking into the system you are protecting."
Your (and hdyr's) looser version is not in common usage and in that sense is wrong.
This is exactly my point. The Jargon file is pretty dated and imo the definition given there isn't really adequate.
My looser version is indeed in common usage. If nothing else 5 HN users seem to agree with my definition enough to upvote my initial comment on the matter.
ideological discussions about disclosure policy aside,
if they are doing this to manipulate stock prices and in doing so create a situation where more actual exploits occur, I'd say that is 'black hat' behavior.. the 'weaponization' is in the 'social engineering' of the market reaction, rather than a direct exploit in this case..
On the other hand, this entire sideshow is bypassed if we use the well-established definition for "black hat", which refers exclusively to illegal behavior involving security vulnerabilities and online fraud. More to the point, reporting facts is not "market manipulation" (which is also a well established term) even if you want it to be, and "social engineering" is not the same as publicizing information with the intent to move the markets. Using these words in the way you are is the same as flippantly redefining them as you go along, with the result that the conclusion is quite brittle. There could be a strong argument that the behavior is unethical, but using these terms as you are doesn't help that point along, it hampers it.
stock manipulation is clearly criminal, if you want to take the 'letter of the law' approach..
beyond this, this gets into the same debate as letter of the law vs spirit of the law, which has both nothing and everything to do with this topic.. black hat is not 'defined exclusively' anywhere, and of course one leaning to a 'letter of the law' argument would then also look for 'exclusive definitions'
as to your point:
> free to call security researchers black hats if they don't give vendors advance notice.
if they are doing this for malicious purposes, yes
if it is for an ideological stance, then, well, it depends on how you view their ideology.
what happens if the law is incorrect?
again, letter of the law vs spirit of the law.
"normative argument about whether or not something is ultimately unethical"
laws are normative arguments about whether or not something is ultimately unethical.. not neutral 'things' that exist in a vacuum. and they can be correct or incorrect, and also incompletely defined..
how does acting completely unethically yet entirely within the law for malicious purposes fit into your framework?
Say for example, actively portscanning (legality nebulous) for already infected computers and then overcharging 2000% for cleanup? Then spamming virii from a jurisdiction where it is not illegal in order to grow this 'business'? All legal.. so it's "white hat?" or is it 'grey hat' because it is in a legal 'gray area'? I don't think that's what grey hat means either..
That wasn't the distinction I was making. A law is a positive statement. An argument of what should be lawful, or an interpretation of a law, is of course normative. But I already said that in this thread.
By the "letter of the law" (section 9(4)(a) of the SEC act and existing case law), stock manipulation involves promulgating outright falsehoods. Case law shows us that exemplary falsehoods have to be categorically untrue; a biased presentation of something that is true does not pass the bar. Being that there is a vulnerability here, the material we have to go on does not paint a favorable outlook on the researchers being indicted. Activist investors routinely present facts to the media with a clear agenda, but the SEC virtually never prosecutes them if there is an inarguable, material kernel of truth to their allegations. There's a vulnerability here. Reasonable people can disagree on the severity of the vulnerability and how it should have been disclosed. But it's not fraud.
> how does acting completely unethically yet entirely within the law for malicious purposes fit into your framework?
Your question has a presupposition; if the security researchers traded on their knowledge of this vulnerability, I find that to be neither unethical nor illegal stock manipulation.
that it is specifically tied to this case.
If people want to bend over backwards to make an argument about the abstract way in which people are harmed by small disclosure windows, activist investing or information asymmetry in the market, they're free to do so. But none of those things qualifies as black hat behavior. Definitions require precision to be useful, and you throw all precision out the window if you decide to lump people with disclosure habits you dislike in with organized criminals stealing identities en masse.
I agree with the sibling commenters here. This is a bad-faith, financially-motivated disclosure with insufficient time given to AMD to react.
The terminology is not flexible, it has a well established meaning. If your bar for a black hat includes legitimate security researchers disclosing vulnerabilities in a way you don't like, you've just expanded the group of people we can call "black hats" almost arbitrarily. You're putting security researchers you have a normative disagreement with into the same group of people who commit actual fraud, steal identities and sell your credit card data.
"...we may have, either directly or indirectly, an economic interest in the performance of the securities of the companies whose products are the subject of our reports."
Are they shorting AMD? https://amdflaws.com/disclaimer.html
Don't tell HD Moore or the Metasploit team about this, though. They may cry themselves to sleep tonight.
> "we may have, either directly or indirectly, an economic interest in the performance of the securities of the companies whose products are the subject of our reports"
Two sides of the same coin
The problem with something like TRO LLC is that markets don't move on security info.
"RYZENFALL allows malicious code to take complete control over the AMD Secure Processor."
"Multiple vulnerabilities in AMD Secure Processor firmware allow attackers to infiltrate the Secure Processor."
If this is legitimate, this is huge! The PSP could potentially be disabled! Very little work has gone into handicapping the PSP compared to the IME.
I'm gonna wishfully think this was intentionally done to allow us to disable the PSP.
Who works for CTS-Labs? Attaching your name to a company like that should disqualify you from any future jobs in the security space.
"Unheard of"? People have dropped serious vulnerabilities with _zero_ warning before.
> "Unheard of"? People have dropped serious vulnerabilities with _zero_ warning before.
Could you point me to an example of a zero warning disclosure that exposed a large amount of users without first attempting to coordinate with the responsible party?
Individuals sometimes do this, security companies very rarely - and both are shunned by the infosec community at large when they do so, as this is very unethical behaviour.
They registered the domain a couple of weeks ago - why give AMD only 24 hours notice?
In this case it does seem highly likely there is some stock market skullduggery afoot.
Just take a look on twitter at what prominent members of the community are saying - they are not impressed with this behaviour. I'm also a member of that community, and hold the same view.
The vast majority of the infosec community promote coordinated disclosure.
The CTS-Labs people are taking shit from vulnerability research twitter for overhyping the findings (meaning: they released a report on a day ending in "y"). People are noting the connection to the short selling --- but since this will be the 3rd or 4th time someone has very publicly done that, I don't see anybody shocked or outraged by it.
But this public ostracism you referred to --- specifically the notion that dropping vulnerabilities with 24 hours notice would reliably generate it --- is fictitious. I'm not sure how you can be a part of the vulnerability research community and believe that there is public shunning attached to dropping zero-days, since many of the best known people in the community have repeatedly done exactly that.
I'm not saying the non-security-researcher users on HN hold opinions representative of the public as a whole, but this comment, and an earlier question asking another user what security research they've published, may point to an ethics disconnect between security researchers and the broader populace, or simply a disregard for the broader populace's concerns. I think it would be beneficial for security researchers (or any professional group) to listen to the ethics concerns of the broader group they're a part of.
On another note, I would also assert that abusive actions by vendors do not excuse abusive actions by researchers (and vice-versa).
As an AMD system owner, I would much prefer that big flaws were disclosed in a coordinated manner with AMD - giving them a fair chance to verify and find a solution, rather than giving bad actors a head start.
I'm partially recalling a fix, on Facebook I think, that was implemented within a few hours of reporting; it was a testing API that got exposed. Different field, of course.
Follow the money.
Other theories discussed here seem less far-fetched than the above, but in any case, it does smell funny.
Sounds like the capabilities include the ability to jump outside a VM sandbox, take over the PSP, and pivot to firmware or BIOS exploits.
Ian Cutress of AnandTech appears to be quasi-vouching for Dan Guido. Ian is also interviewing CTS-Labs tomorrow morning and is looking for questions.
I'm not a professional security researcher, but this is looking pretty darn flimsy. I also don't see any proof-of-concept code anywhere -- the "whitepaper" seems to just claim these things exist with very little mention of how to exploit them. Compare against Meltdown/Spectre, which were highly technical and had lots of PoC code. This just says "Upload malware to the processor" without further comment.
I'm not saying they didn't find anything, but whatever they found, they've hardly disclosed it.
No, I don't want HDCP or any similar crap; let me run my servers and desktops in secure mode.
Physical damage to hardware (SPI flash wear-out, etc.)
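If the wear-out claim is real, a back-of-envelope check makes the timescale concrete. A minimal sketch, assuming a typical SPI NOR flash endurance rating on the order of 100,000 program/erase cycles (an assumed figure; the real number is part-specific):

    # Rough wear-out estimate for a single SPI flash sector under
    # malicious rewriting. The 100,000-cycle endurance figure is a
    # common NOR flash datasheet rating, assumed here for illustration.
    ENDURANCE_CYCLES = 100_000

    for rewrites_per_minute in (1, 10, 60):
        minutes_to_failure = ENDURANCE_CYCLES / rewrites_per_minute
        days = minutes_to_failure / (60 * 24)
        print(f"{rewrites_per_minute:>2}/min -> ~{days:.1f} days to exhaust endurance")

At one rewrite per minute that's roughly 69 days; at one per second, little more than a day. So malicious firmware hammering the flash could plausibly brick it in days rather than years, which is presumably what that item alludes to.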
Reminds me of little kids trying to fill out their 200-word essays.
> "we are letting the public know of these flaws but we are not putting out technical details and have no intention of putting out technical details, ever"
It's still a risk, because now people know where to look and can try to recreate it themselves, but it's not like a full-disclosure release where you're SOL as a manufacturer and have to race rampant public exploitation.
Insider trading claims might be difficult since you can claim the vulnerabilities were public knowledge waiting to be discovered, but...
Can you trade on knowing the security disclosure timeline prior to your publication of the vulnerability? That would seem to be insider knowledge until AMD authorizes publication. E.g. I've got knowledge that AMD likely wouldn't be able to fix the flaws prior to my disclosure. That knowledge would inherently be non-public.
Disclosure: I've been long AMD for a while.
Imagine someone buying stock and then saying the company is good. Not very controversial, is it? Warren Buffett does it. Shorting stock and saying the company is bad is just the flip side of that.
In fact, there are equity research companies that do specifically that (e.g. Muddy Waters). Whether that research holds water or not is for the market to determine (AMD is up on the day).
Correct. I'm not referring to this. I'm referring to trading on information discerned from communications with e.g. AMD but prior to disclosure of the vulnerability, especially if the communications that establish e.g. timelines are only disclosed after the trades.
Hence my point about trading upon understanding AMD's response timeline e.g. from emailing them.
AMD doesn't have the power to prevent publication of research from third-party researchers that haven't entered into an agreement with them. This definitely isn't insider trading.
Correct, though this assumes AMD has yet to reply. If AMD did reply after the initial disclosure and before any trades were made, then the order of events which may warrant such a look would be:
- Private Disclosure made.
- AMD replies privately with anything substantive. This could include a timeline, or even an indication that a fix may take a while.
- Trades made.
- Public disclosure made.
Should events follow this sequence, trades made would have been informed by private knowledge from AMD that had yet to be released.
I'm not saying this is how it played out, but if it did, I'd suspect some amount of legal exposure.
Their replies are only private if the discloser keeps them private. The discloser is free to publish their correspondences with AMD which may include a timeline.
Regardless, this entire publication can happen without once talking to AMD. There is no need to notify or correspond with AMD in order to publish the security research and short AMD's stock.
Sure, but my point is that if trades are made and the correspondence is subsequently released, those trades were made with insider information.
The interesting one is the attack against Promontory. It still requires VM host access to exploit, so the impact is limited.
90 days is not a standard. Nothing was shortened. People are allowed to publish their research whenever they like. Vendor advance notification is optional.
Full, immediate disclosure is responsible.
Well, fuck 'em, I guess.
Responsible disclosure, contrary to the super-cool leet-kid notions expressed by people who choose to exhibit an underdeveloped social conscience, is not about doing a solid for the companies that have vulnerabilities. It's for the users who consume things. Security researchers are effectively taking upon themselves a role of public service. That comes with responsibilities to the public, not to AMD or whoever.
Meanwhile, this crew looks like they briefed the media before telling the vendor, which is all kinds of fucked.
1. All of the relevant people, i.e. "the users downstream of bugs", are already vulnerable.
2. It's possible, maybe even probable, that people other than the researchers disclosing the vulnerability have also discovered it and, furthermore, that those others can exploit it.
3. Every delay in disclosing the vulnerability prevents the victims from protecting themselves from the bad actors mentioned in (2) through means more drastic than applying a patch or similar from the relevant vendors (e.g. taking the affected components offline or otherwise making them unavailable).
The argument hinges on the probable number of bad actors mentioned in (2). If you assume that the disclosing researchers are the first people to discover the vulnerability, then it would possibly be best for them to disclose it to the relevant vendor or vendors first. But note that even vulnerabilities disclosed to vendors can leak to bad actors.
And if you don't assume that the disclosing researchers are the first people to discover the vulnerability, then not disclosing ASAP prevents people from protecting themselves.
I think a cursory look at the world indicates that this is not even adjacent to reality.
Some high-value targets (e.g. key infrastructure, parts of government, major enterprises) have dedicated security teams, and can come up with a pretty decent response if given the appropriate information. Divulging vulnerability information widely, in particular, may or may not be a net benefit to them. (Consider e.g. Linux vendor vulnerability lists.)
Other high-value targets (e.g. journalists, human-rights activists, etc.) are utterly outgunned by their adversaries (who can afford to buy or find new vulnerabilities), and can only hope that something causes vendors to consistently write software that's sufficiently uneconomic to exploit. In a sufficiently long run, proponents of full disclosure would argue, anything that increases the cost of shipping vulnerable software should help these users.
(Disclaimer: absolutely not speaking for my employer here.)
I agree that some proponents of immediate disclosure would claim that their actions encourage vendors to ship less vulnerable hardware or software. I do not believe that that, in the general case, is why it is being done. And I am certain that that, in this specific case, is not why it was done.
However, overall, I agree with you. Person with exploit needs to compare the probable consequences of disclosing at time N vs. disclosing at time N+1.
If it's being exploited in the wild and users can meaningfully self-protect, disclose now!
If the vendor will probably have a patch in 2 weeks, there is not widespread exploitation of the vulnerability, and disclosing now will cause widespread exploitation, disclose in 2 weeks.
If the vendor seems like they will never issue a patch on their own (because significant time has elapsed), such that at some point in the future there's going to be widespread exploitation and you're only hastening that a bit, go ahead and disclose now.
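None of this is mechanical, but the three cases above amount to a rough decision rule. A toy sketch in Python, where every parameter name is an invented label for a judgment call rather than anything from the thread:

    from typing import Optional

    def when_to_disclose(exploited_in_wild: bool,
                         users_can_self_protect: bool,
                         vendor_patch_eta_days: Optional[int],
                         disclosure_enables_exploitation: bool) -> str:
        """Toy heuristic mirroring the three cases above."""
        # Case 1: active exploitation and users can meaningfully respond.
        if exploited_in_wild and users_can_self_protect:
            return "disclose now"
        # Case 2: a patch is imminent and early disclosure would cause
        # widespread exploitation in the meantime.
        if (vendor_patch_eta_days is not None
                and vendor_patch_eta_days <= 14
                and not exploited_in_wild
                and disclosure_enables_exploitation):
            return "disclose when the patch ships"
        # Case 3: the vendor shows no sign of ever patching; disclosing
        # now only hastens the inevitable.
        if vendor_patch_eta_days is None:
            return "disclose now"
        return "keep coordinating with the vendor"

The point isn't the code; it's that every input is a probabilistic judgment about actors you can't observe, which is why reasonable researchers land in different places.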
Hiring a PR firm before disclosing to the vendor is not responsible. Briefing select press before disclosing to the vendor is not responsible.
I can go with immediate, or I can go with never. But realize that every vuln is different, and its impact (or the difficulty of writing or applying patches) may not always be fully understood by the stakeholders involved before, or immediately after, the details are released [CVE-2015-0235].
We could always have the government regulate this, though, instead of being professionals and self-regulating.
Fortunately for us all the actual exposure is minimal.
Is this confirmed yet?