Security Researchers Publish Ryzen Flaws, Gave AMD 24 Hours Prior Notice (anandtech.com)
401 points by andrepd on March 13, 2018 | 336 comments

Amazing coincidence!

On the very same day this information came out, 'Viceroy Research Group' managed to release a 33-page 'analysis' of these results. With illustrations.


>We believe AMD is worth $0.00 and will have no choice but to file for Chapter 11 (Bankruptcy) in order to effectively deal with the repercussions of recent discoveries.

Viceroy Research lists no employees or contact address, but it appears they are not a crack team of hardworking & incisive business analysts, but two Australian teenagers and a former UK child social worker, struck off in 2014 for misconduct.

They have previous form in producing or plugging short-call stories (quite effectively), and were latterly investigated by South African media over similar shady business.


It took very little internet sleuthing to find this stuff out. None of the tech press bothered to do so.

Disclaimer: I have no position in AMD.

Edit: link to Viceroy https://viceroyresearch.org/

If you look at the metadata of both the white paper and the "analysis", you can see that their creation times are only 2 hours 50 minutes apart.

And that's the creation date, not even when they were published.


(Replying to myself because I can't edit my post anymore)

Edit: And it gets better! If you check the HTTP headers when requesting the whitepaper from their servers, it will tell you that the file was placed there (last-modified) at 13:22 GMT, so just 1 hour before Viceroy Research Group created their analysis - and probably ages before the actual news broke.
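If you want to reproduce that timeline check yourself, here's a rough sketch. The timestamps below are illustrative, modeled on the gap described above, not read live from the files; in practice you'd take Last-Modified from a HEAD request and /CreationDate out of the PDF itself (e.g. with exiftool or pdfinfo):

```python
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

def parse_pdf_creation_date(raw: str) -> datetime:
    # PDF metadata dates look like "D:20180313142200Z"; take the first
    # 14 digits and treat them as UTC for this sketch.
    digits = raw.removeprefix("D:")[:14]
    return datetime.strptime(digits, "%Y%m%d%H%M%S").replace(tzinfo=timezone.utc)

# Illustrative values, not the actual headers/metadata:
last_modified = parsedate_to_datetime("Tue, 13 Mar 2018 13:22:00 GMT")  # HTTP Last-Modified
created = parse_pdf_creation_date("D:20180313142200Z")                  # PDF /CreationDate

gap = created - last_modified
print(gap)  # 1:00:00
```

A one-hour gap between the file landing on the server and the "independent" analysis being created is exactly the kind of thing this surfaces.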


If they did this just to short AMD and make money, that's indeed quite shady, and they went to all the trouble of hiding their real intentions because they know it's super-shady.

That said, unless the whole "research" is fake, I wonder if we could be seeing more such tactics in the future against tech companies, and whether or not that would give them an immense incentive to care about security - or risk getting ruined in the stock market.

Honestly, such a huge incentive may actually be needed to get most companies to care about security. The money equation needs to make sense to them. Right now most think that investing the absolute minimum in security for compliance reasons is already too much money wasted on security. If this were to become common, I think maximizing security would actually start looking quite profitable to them.

I mean, this research is already saying there are some backdoors in AMD's chips. I imagine in the future, companies would be way more careful about allowing backdoors in their products, whether intentionally or by mistake, if they knew they risked getting their stock crushed.

So yeah, I just like to play with this idea a little bit. So far this revelation doesn't seem to have had the effect "desired" by the backers of the research, though, but we'll see. I just want to know whether or not the research is real, so I'll wait for AMD's confirmation. I assume AMD wouldn't try to lie to us about it, because there are now probably at least a dozen security teams trying to pick AMD's chips apart, so the flaws would be found soon enough, if real.

Well, this could be interesting. AMD is a US-listed security. If true, these two lads could very well look forward to a visit from the US SEC. Seeing as market manipulation is not a capital crime, I don't see Australia objecting to an extradition, should charges be warranted.

Which exact crime are you alleging, specifically? Plenty of short sellers investigate companies and their products and make investment decisions based on their findings.

I work in finance, and one of the many hats I wear at the small ATS (Alternative Trading System) I work at is regulatory analyst. Action probably won't start with the SEC, but possibly FINRA or any exchange they're trading through. This definitely reeks of manipulation. If I knew any trades on this had come through my ATS, it would be my legal and ethical duty to report them. I could still be asked to provide all trade activity for AMD.

They tend not to weaponize those findings, putting innocent people in harm's way.

Perhaps so, but that's not a crime. There's nothing illegal about trading on your own private research.

Trading on research, no. But attempting to artificially manipulate the market while doing so is effectively "pump-and-dump" but short instead of long. A lot comes down to timing and exactly what the communication says.

Not a sure-thing conviction, but certainly a dangerous business plan.

Matt Levine wrote about this recently, and comes to a somewhat different conclusion (though he's not a lawyer):


> If you think a company is bad, or fraudulent, you can sell its stock short and try to profit when everyone discovers its problems and the stock drops. If you want to hurry that process along, you can always noisily publish research reports explaining why the company is bad or fraudulent. If your research reports convince other investors of your thesis, then the stock will drop, and you will make money. There are more longs than shorts, and more dicey public companies than noisy short hedge funds, and so people who use this strategy tend not to be especially popular. In particular people often go around accusing them of fraud, or market manipulation. "Wait," people ask, "how is it not manipulation to short a stock and then publicly announce that the stock is bad?" I am always confused by this complaint. Just flip it around: It's not manipulation, surely, to own a stock and then publicly announce that the stock is good.

(Followed by further justification of this position).

But he more or less assumes that you are stating an opinion on the company, rather than misrepresenting or downright inventing bad news.

True, that's the actual crux of the question here; if you are inventing bad news and trading on it, then by my reading you're probably engaging in stock manipulation.

On the other hand, it seems that uncovering new, true information, and then taking a short position on it, is not illegal.

A lot of the comments in this thread were assuming that there was some crime just from reporting the bad news and trading on it, unconditionally on whether or not the news was true.

Apparently known as "short and distort". See also "When Does Short Selling Become Manipulation?":


From that paper: "The term 'manipulative'... connotes intentional or willful conduct designed to deceive or defraud investors by controlling or artificially affecting the price of securities."

If the claims you're making are true, then it's not deceiving or defrauding, even if the way the information was published was immoral under standard professional ethics.

If these vulnerabilities were misrepresented by the short sellers that funded it, then I suspect that would bring them into stock manipulation territory.

Hmm ... I was more tempted to dig into connections between the Israeli firm and Intel but your analysis is way better!

Why is AMD up 1.04% today then (while the technology index is -1.16%)?


    AMD	$11.64

Look at the intraday pricing. There were some small slumps. Clearly most traders did not react too negatively to the story, at least by the end of the day.

I even thought Meltdown/Spectre was overblown, and that the average user would never see those attacks.

Then you were wrong, since those attacks against unpatched, unhardened hosts are trivially weaponizable through browser Javascript.

They're weaponizable when using a small and rapidly shrinking percentage of unpatched browsers running JavaScript delivered by extremely uncommon websites.

That's true because the vulnerability itself wasn't overblown, and was immediately patched.

I am particularly talking about things like Intel's stock price, though.

Oh. Sure, I buy that. I don't believe patchable vulnerabilities really exert major pressure on stock prices.

>JavaScript delivered by extremely uncommon websites

All it takes is emailing them a slightly convincing link, and they're running JavaScript from one of those "extremely uncommon websites". It doesn't matter how common the website is; a single website can compromise millions of users.

Attacks only get better.

Can you point to a javascript example?

I can think of a number of approaches, but nothing I could categorise as trivial.

First hit for googling "Spectre Javascript POC": https://github.com/ascendr/spectre-chrome

> Enable `#shared-array-buffer` in `chrome://flags` under your own risk...

SharedArrayBuffer was disabled exactly because vulnerabilities like this are easily exploitable (but there are POCs that don't depend on it).

It was only disabled as a mitigation to these specific attacks, in case you thought it was an experimental or “at your own risk” type of thing.

Disabling SharedArrayBuffer is just stopping the most obvious method of exploitation; it's by no means a fix. Expect a slew of papers over the next few years on other methods of exploitation from JS.

Every single browser had to disable that feature because of those flaws.

>All of the exploits require elevated administrator access, with MasterKey going as far as a BIOS reflash on top of that. CTS-Labs goes on the offensive however, stating that it ‘raises concerning questions regarding security practices, auditing, and quality controls at AMD’, as well as saying that the ‘vulnerabilities amount to complete disregard of fundamental security principles’. This is very strong wording indeed, and one might have expected that they might have waited for an official response.

Extremely fishy. 1-day notice? Such aggressive wording without even the chance for AMD to address the concerns?

Yeah it's suspicious. The website[1] has many fancy infographics, marketable names and fear mongering but you have to dig into the whitepaper[2] to find any details about the actual vulnerabilities. And even then it starts only on page 8 of 20 and you discover that it's vulnerabilities targeting the secure boot infrastructure and you need local admin to exploit them. It's not good but it's not a new Spectre or Meltdown.

If I was the tinfoil hat type I'd guess that Intel is trying to spread FUD but maybe it's just security researchers trying to generate a bit of buzz for their company at the expense of AMD.

[1] https://www.amdflaws.com/ [2] https://safefirmware.com/amdflaws_whitepaper.pdf

It's possibly even more nefarious than that: 1) execute a series of puts on AMD, 2) release exploit 3) profit. If you execute an option with a far time horizon and give the company enough time to mitigate their vulns, then I think this is not an irresponsible thing to do (as it incentivises the company to actually do something), but with 24 hours notice...
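To make the mechanics concrete, here's a toy put-option P&L calculation (all numbers made up for illustration; not actual AMD option prices):

```python
def put_pnl(strike: float, premium: float, spot_at_expiry: float, contracts: int = 1) -> float:
    """Profit/loss from buying put options, ignoring fees. One contract = 100 shares."""
    intrinsic = max(strike - spot_at_expiry, 0.0)
    return (intrinsic - premium) * 100 * contracts

# Buy $11 puts for a $0.50 premium, then the stock tanks to $8:
print(put_pnl(11.0, 0.50, 8.0))    # 250.0 per contract
# ...but if the "revelation" fizzles and the stock holds at $11.64:
print(put_pnl(11.0, 0.50, 11.64))  # -50.0
```

The asymmetry is why the timing matters: the play only pays if the news moves the price before the options expire, which is an incentive to maximize the splash rather than the notice period.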

Seeing as CTS-Lab's CFO also founded a hedge fund you're probably on the right track.

>Yaron co-founded CTS-Labs in 2017, and previously served as an intelligence analyst in the Israeli Intelligence Corps Unit 8200. He is also the founder and Managing Director of NineWells Capital, a hedge fund that invests in public equities internationally. He holds a B.A. and M.A. from Yale University.

Basically a follow the money situation.

Could something like this be considered inside information? Or is it legal to actively manipulate stock prices to one's benefit in this way?

> Could something like this be considered inside information?

No, illegal insider trading refers to trading on inside information when you have a confidentiality agreement or a fiduciary duty. Information asymmetry is insufficient (or else it would be virtually impossible to profitably trade at all).

> Or is it legal to actively manipulate stock prices to one's benefit in this way?

The way you're presenting this is a false dichotomy. It's not "manipulating" stock prices except insofar as people broadcast news all the time which alters stock prices. Strictly speaking, it's not market manipulation if it's true. If it's false, it can be, which is why you really try not to do it unless it's true.

Might the latter also depend on how you present it?

As far as I can see, this is only an exploit of secure boot if you already have ring-0-level access. Making a whole webpage with lots of graphics and whatnot, sending press releases all over, and in general presenting it like a security flaw on the level of Meltdown seems... false?

Probably court-level material. In any case it seems to have backfired, as the stock is up.

I think the difference in facts you're talking about is a difference of degree, not category. In other words, it's not plainly false, there certainly is a vulnerability, it's just perhaps exaggerated. I could see a case being brought against the researchers on those grounds, but I'd be really surprised if anything came of it.

It could be interesting. Some of the flaws presented require you to flash your BIOS, if I understand correctly. They are included with what are likely real flaws, but maybe it's enough for a case of misleading the public in part. To me it seems sort of like saying Ford engines have a tendency to blow up, but only after you've overwritten some engine firmware. By itself, not much to talk about, but when attached to something that shows indications of being used to make money through stock changes, maybe it's more likely to be looked at unfavorably by the SEC?

Hopefully someone with more experience can answer this, but could the stock be going up due to high short interest?

Maybe not technically a short squeeze (https://en.m.wikipedia.org/wiki/Short_squeeze) but related?

What if it's not false but misleading? That sounds closer to what they are doing here. Sure, they included a ridiculous disclaimer that you apparently agree to have read if you continue reading their webpage, and it says something along the lines of "this is our opinion".

It feels slimy and gross.

"Slimy" and "gross" are not nearly sufficient for either insider trading or market manipulation. I don't particularly like the way these researchers are acting either, but that's actually because I don't like vulnerability impact being exaggerated and over-hyped. The other stuff doesn't bother me too much.

If the news pushing a stock price is so misleading that it's categorically different from the truth, then I could see a case for market manipulation being brought against them. But I doubt that will happen, because unfortunately people have broad latitude to portray vulnerabilities however they'd like as long as they're convincingly authentic.

I was just relaying my feelings in that last bit. But the misleading bit in my comment specifically refers to the fact that it's an exaggeration, as you put it. So you can flash a BIOS and make it do nefarious things: that is a true statement, but it's hardly a "security flaw" and not even very surprising. They are, however, taking a flaw and spinning it out to be the massive security flaw which it isn't.

I guess that's my question. If you take a true statement and put it out with connotations that it's actually a terrible thing, worse than it is, just being "true" doesn't matter. Stupid analogy: say you go to a bakery and buy bread, and it goes stale within two hours or so, by the time you get home. Then you go and write a negative Yelp review saying this bakery is terrible at baking and sells subpar bread. You just don't know that the bakery doesn't bake in the preservatives that supermarket bread has. And it certainly isn't as if the bakery is selling spoiled/poisoned bread or rocks painted to look like loaves.

To add to the other comments here, a recent high profile case of something similar was Bill Ackman shorting Herbalife. Basically, he shorted the stock and then went to the media with his research showing that he believed Herbalife to be a pyramid scheme. Ultimately, I believe he lost money on the whole fiasco, but it's not an uncommon strategy. The whole thing made the news after a particularly amusing exchange between Bill Ackman and Carl Icahn (who had the opposing view) on CNBC: https://www.youtube.com/watch?v=hCZRk1lL90Q. I worked on Wall Street at the time and I remember the entire trading floor at my bank was almost frozen as they were watching this live.

Regarding Ackman vs Herbalife, the old adage comes to mind: The market can stay irrational longer than you can remain solvent.

I see. So it could be a viable business to do research like this, short the stock, and then release the information. It just has to be a bit more damning than this, as the stock price is actually up today!

Seems a bit shady in any case if one were to do this the same way as when companies pay researchers to make claims publicly that benefit the company, like the process leading up to the banning of lead in petrol, etc.

No. Doing independent research is not “inside information”. This is actually pretty close to how a market is supposed to work.

It’s also not market manipulation to publish factual information or opinions. Only knowingly publishing false information would qualify.

Well, if it was having an effect I'd be buying AMD right now against the market but it doesn't seem to.


I tend to imagine that if Intel were doing this they’d do a better job of it. Even if CTS-Labs are completely legit, the way it’s been done has led to immediate suspicion of the claims and people involved, in a way that feels much more like a small group straining for attention or to make a quick buck and making a bit of a mess of it. If Intel were involved, I’d expect it to be done more professionally and simply better, so that people don’t suspect foul play and go looking for problems. It is possible for a company to deliberately obfuscate the trail by doing it this way, at the probable cost of some effectiveness (though if the claims are overblown it might perhaps be more effective this way), but it seems less likely.

A really fun interpretation of it is AMD doing it themselves, deliberately badly, so that they can come off as the wounded party that actually have really good hardware. Risky, but probably not impossible to carry off.

While fun, it would also mean they would be publicly disclosing vulnerabilities in their own systems and then deliberately withholding the patch, just to put on this spiel in order to appear as the underdog?

Confirmation that the vulnerabilities are legitimate pretty much writes this one off. At the time I wrote it this wasn’t entirely clear, though it seemed probable (though even then they could have been overstated).

Well, one good thing that could come out of this is that it might tamp down on the use of scaremongering names for security vulnerabilities.

People here seem to be mentioning short sellers being connected to this research as if there's some sinister collusion going on. This is the entire point of short selling, and the SEC encourages this type of activism. It allows people who can provide expert knowledge to profit off a trade if they can reveal damaging and legitimate information about a company.

For example, a short seller last year revealed (through extensive research) that Valeant Pharmaceuticals was stuffing its channels and faking its finances. He placed a huge short sell and went public with the damaging info - tanking the stock from $270 to $12 - and made a ton of profit off of it: https://www.nytimes.com/2017/06/08/magazine/the-bounty-hunte....

Without this incentive, why would anyone bother to reveal damaging info? You're placing yourself as a target with no reward. The payment is the natural balance of the market.

So yes, this research firm is connected with a hedge fund, and they have a very vested interest. But that doesn't make their claim untrue.

You mean to tell me my machine can be exploited if I let someone do one of the following:

1. Flash the BIOS
2. Have admin access

Holy shit, this calls for a full fledged panic!

I am very disappointed anandtech.com even bothered to give this smear campaign the time of day. If someone can flash your BIOS or has admin access then you already have way bigger problems.

CTS-Labs is very forthright with its statement, having seemingly pre-briefed some press at the same time it was notifying AMD, and directs questions to its PR firm. The full whitepaper can be seen here, at safefirmware.com, a website registered on 6/9 with no home page and seemingly no link to CTS-Labs. Something doesn't quite add up here.

Anandtech is reporting on the situation more than the flaws. That does require covering what the flaws are though. Not covering it at all isn't exactly performing good journalism either.

Independent researchers don't owe AMD a chance to address anything. They bought the chips on the open market where AMD makes them available, and then used their own time and materials to conduct their own research. Their work product is their own, and AMD has no claim to it.

There are, as I see it, two rational, coherent ways to be outraged about this story:

1. The vulnerabilities are fabricated and the report is fraudulent, in which case, by all means, slag the researchers.

2. The vulnerabilities are real, in which case AMD is an 11-billion-dollar company that got outmaneuvered by what appears to be 4 dudes in a basement.

People use AMD chips. It's about more than AMD's stock price.

I do not need to be a security researcher to understand that they, as with everyone else, have an obligation to the body politic to not be a dick (as in all things!). There are actors who may be aware of this attack already--but, as I mentioned elsethread, wider knowledge of attacks like this has a much higher chance of splashing back on end users who literally don't know any better than it does on AMD. I mean, I couldn't give less of a shit about how AMD feels--they'll be fine regardless--but there are people downrange of this, not just some company.

This is shoot-the-hostages stuff, and I believe you are better than being OK with that.

You can consult the search bar at the bottom of the page to learn that I am 100% OK with immediate, uncoordinated disclosure. It's not what I personally do, but that's easy for me to say because I don't find these kinds of vulnerabilities.

This isn't "shoot the hostages". The researchers didn't manufacture the vulnerabilities; AMD did. If 4 dudes in a basement can find exploitable driver vulnerabilities, so can 10 researchers none of us will ever have heard of, working in a nondescript office somewhere in Bulgaria. The only moral difference is that these 4 dudes told us about what they found --- something else they had no actual obligation to do.

Again: it seems really likely that these vulnerabilities have been hyped way out of proportion to their real impact. I think it's reasonable to be irritated by that (again, though: this isn't a first). But other than that, I don't understand how people arrive at the conclusion that independent security researchers owe strangers the results of their work.

I understand what you are OK with. I am saying that I believe, from a fairly long scope of interaction, you are a better person than that.

They've disseminated widely an attack strategy to people who didn't have it. Nobody except AMD can fix the problem, regardless of the good intentions of other actors--on the other hand, many bad actors can use that information. That's as shoot-the-hostages as it gets.

Security researchers owe "strangers" (which is a really weird term for "society at large" that I don't think you, specifically, would be using with such connotations outside of a security context where you'd already made a decision) the same courtesy they owe everyone else: to not endanger people unnecessarily. I agree with you that this is a relatively minor vulnerability, I'm not hyping it or anything--but it's still a vulnerability, it is still more widely known now, and there is a bigger pool of bad actors than there was last week able to use it against people, irrespective of AMD's stock price.

There's certainly a gray area, if a vendor hasn't acted to fix something you know they know about. I'm not talking about that. But 24 hours and briefing the media before letting AMD know, as it very much seems like they did, is well outside of what I could consider any reasonable gray area.

If you care about end users, and you should because they are your fellow people, you don't publicize how bad actors can hurt them. You just don't. It's just...minimal decency, to care about other people. I can't see it any other way.

I strongly disagree with the reasoning you're using here.

The premise of your argument is that without vendor cooperation, end-users are helpless to mitigate the impact of security flaws.

No, they aren't. Not only are they not helpless, but many of them are in fact ethically obligated to mitigate exposures with or without the assistance of their vendors. Almost every end user has at least one last-resort mitigation for any vulnerability: the power switch.

Most of the time, most users have better non-patching mitigations than that. These vulnerabilities are all post-compromise privilege escalation flaws. Their exploitation is situational and most users can do things to eliminate the situation that enables their exploit.

You might not like the fact that end-users have to make hard, expensive choices about how to mitigate flaws. But if you think about it for just a second, you'll see that the idea that patches were saving them from this choice was fallacious. There is no reason to believe these 4 dudes were the only ones in the world capable of finding these flaws (the reality is that if they're the only ones who know about them, it's because the kinds of flaws they found simply aren't important enough to demand focused attention from others). All restricted disclosure does is prevent end users from making the choice for themselves.

I believe that as a general rule, we're better off when we have the most information available to us about vulnerabilities. Personally, I'd probably stop short of publishing exploit code. But other researchers that most of us respect a great deal in the abstract do not have that particular scruple, and some --- like the original Metasploit project --- made it a point to publish exploit code immediately, patch or no patch, to arm operators with information about their exposure.

This isn't an idle opinion. If there was working Usenet search in 2018, you could find me making approximately the same argument back in the 1990s, when I worked as a researcher at SNI, the world's first commercial vulnerability research lab.

> These vulnerabilities are all post-compromise privilege escalation flaws

I would say they are all invasive evil-maid threat vectors. Each one requires either physical access to the hardware or (as you stated) already established root privileges. We all know that if you have physical access to hardware, it's essentially game over.

However, one of the vulnerabilities supposedly allows subverting UEFI Secure Boot. If that's true and allows booting arbitrary media, then the others are equally feasible, because an attacker can boot into a root shell of their choosing.

The timing of this disclosure reeks of malice, though. Giving a 24h advance warning basically allows the outfit to claim that they disclosed the vulnerabilities to the manufacturer before going public. Technically true. Just highly misleading and dishonest.

I have personally no beef with full disclosure, and have advocated it as a viable mechanism since the mid-1990s. I also happen to think that responsible disclosure is a good approach, but it definitely needs the threat of FD as a stick, because otherwise vendors would not have any real incentive to work on addressing security bugs. Name-and-shame does work.

Let's get back to the AMD flaws. Giving a really short window? Basically just enough to have an initial PR response ready? Have the decency to go full disclosure. Or give a full month. AMD won't be fixing the bugs before the news breaks in either case. Just don't claim this is anything but a maliciously crafted exercise with ulterior motives.

While I'm fine with criticizing them for partial disclosure, I again have a problem mapping any of this back to ethics, because, again, independent researchers do not have an obligation to vendors or to any amorphous public. As long as they aren't literally exploiting (or arranging to have exploited) vulnerabilities to break into people's computers, or lying about what they found, I don't think ethics have much to say about what they should do.

> I don't think ethics have much to say about what they should do.

What does that even mean? What do you think "ethics" means? This is a nonsensical statement.

The consideration of what people in certain situations should or should not do, IS ethics.

Even if someone would say (for some reason) "but researchers should be able to do their work without consideration", that is making an ethical statement.

I understand why you would have a problem mapping this back to ethics, because if you'd formulate it as such, it would sound kind of bad: Researchers have no ethical responsibilities to the public.

You can't choose to not let decisions be guided by ethics, that's like claiming you choose to find your way without navigating. It makes no sense.

No obligation to vendors, no obligation to the public, so what are your ethical standards exactly? It sounds like not committing crimes is the extent of it, but that's a legal standard and not an ethical one. At what point are you less of a researcher and more of a sociopath with a keyboard? What makes researching software vulnerabilities such a uniquely non-ethical undertaking compared to all other forms of research?

You seem like a living argument for ethical standards being imposed on your industry, by law if needed.

In exactly what way are you harmed by someone discovering a vulnerability --- that existed whether or not they did the work --- and then telling you about it?

You're arguing that the force of law should prevent you from learning inconvenient things about the software you use.

You are not harmed by someone discovering a vulnerability and telling you about it. Obviously that benefits you rather than harming you.

You are harmed by them discovering a vulnerability and telling the world about it.

And if they discover a vulnerability and tell both you and the rest of the world, the harm may easily outweigh the benefit.

Suppose I go wandering around the city where you live, checking for unlocked house doors. I find that you've left your front door unlocked and gone on holiday. I then wander the streets shouting "Thomas's house is unlocked and no one's at home!". I also phone you up to let you know your house is unlocked.

It was your fault, not mine, that the house was unlocked and no one at home to deter burglars. In principle, anyone else could have come along and burgled your house, if they'd found it before I did. None the less, I think that in this scenario I have done you wrong.

> and then telling you about it?

The argument against your position that people are trying to get across to you is not that. It is that publication of a vulnerability, without giving the vendor a heads-up and time to prepare a solution, greatly increases the risk that users will be harmed by attackers exploiting the public knowledge. Often a substantial number of users are not going to mitigate or resolve the problem without their vendor giving out an official solution.

And if I don't want to jump through whatever random hoops message board nerds have erected and just decide not to disclose at all, exactly how are you better off?

From this and other similar responses of yours here I think that you do not have a convincing way to resolve the obvious problem with the absolutist 'i can do whatever i want with my research' stance that people here pointed out to you. So you do whataboutism directed at vendors, misrepresent people's arguments or try to pivot the discussion. Perhaps it is time to write less and let the discussion sink in a little. You may find a better way to argue your point, or even find you no longer want to do that.

I don't think anyone's arguing that a researcher has a responsibility to tell anyone. If they find a vulnerability and then decide to completely shelve it, that's fine (if maybe a little pointless?). But if they do decide to do some kind of disclosure, I (and others) would argue that researchers have an ethical responsibility to do so in a way that they believe will do the least harm.

It's certainly reasonable to argue which kind of disclosure is the best way to achieve minimal harm, but my opinion is that it's unethical to disclose without considering what method of disclosure will do the least harm, or, worse, just not caring and going for the "biggest splash", as is what it seems these researchers did.

“The premise of your argument is that without vendor cooperation, end-users are helpless to mitigate the impact of security flaws.”

I know everyone in my family is ignorant of this “disclosed” security flaw and is powerless to mitigate the vulnerabilities disclosed on their own. Even if they did know to “turn off their computer” as someone said, are they supposed to wait until someone calls them to tell them a patch is ready?

Disclosing a vulnerability for profit at the expense of everyone else is a shitty thing to do. Would giving AMD a few days to fix it have hurt as many people as giving them one day?

How many vulnerabilities are you capable of finding in software that everyone in your family uses, and can't find for themselves? I'm sure the number is not zero. Is it unethical for you not to go look for them?

Looking is fine. Disclosing in a way that's likely to cause harm that could otherwise be minimized with a little care... not so much.

This is probably where we diverge. From where I stand, "end users" are incapable of making a meaningful decision about security at this level. It would be awesome if they weren't, and god knows I have spent a decent amount of time in my life trying to bootstrap people into such a position, but it doesn't...like...work. There is a computing priesthood, as much as we have tried to democratize this stuff, and it's all goddamn nonsense to those outside of it. The set of people I know who do not actively work in tech and can make meaningful decisions about the technology they work with is...my girlfriend, probably. Can't really think of anyone else who isn't reliant on the "do this" advice of others, whether it's correct or not.

Continued education to help end users get to the point where they can make meaningful and educated decisions is great, and should be pursued, and I do it where I can (though most of the time there's just a shrug and a "whatever"). But, barring that, somebody's gotta make choices on their behalf, and there's a Jerry Garcia quote for this one, you know? With great power comes great responsibility, and we gave ourselves that power. And, outside of a security context, this is why I unflinchingly come down on people who work for shit companies that hurt people, why I'd never hire someone who worked for, say, a toolbar vendor in the 90's/00's and why I have fired clients before when I discovered they were doing shitty things with data gleaned from people who trust them: because we have ethical responsibilities to the people downstream of us who are ill-equipped to make meaningful, educated decisions. I can't compel anyone to do as I do--but I can say that one should, because it's decent.

I can't agree that the power switch is a reasonable mitigation in 2018. In the nineties, sure, but too much of life revolves around this garbage we invented and keep mostly creaking along. (Should it? Probably not. Does it? Yeah.) We are on a ratchet, we can't go back, and kicking the decision down to people who literally-literally lack the tools to make a wise decision while painting a target on them for bad actors who can take advantage of them is profoundly disturbing to me.

This particular vulnerability is a post-compromise privilege escalation flaw, yes. But it strikes me that the conversation must be bigger than that, because the same arguments are used for both. This? Low stakes. Heartbleed? Incalculably high stakes. But the same argument could/would (if it were found by shitheads rather than people with a certain amount of decency to them) be used for the latter instead of the former, and that's what makes me itch.

(And to be clear, irrespective of this conversation, you know I am a big fan.)

So the 11 billion dollar vendor who shipped vulnerabilities in the first place gets to treat these problems as an externality, but 4 dudes in a basement who did a basic research project have to be restrained from speaking?

I don't see how you get there from here.

I don't see how you get from what I said to me thinking the vendor gets to treat these problems as an externality. I am all in favor of slagging vendors who release buggy shit. For hardware (and some software) manufacturers, I'd be in favor of significant legal remedies being available to people who purchase hardware later found to contain security vulnerabilities.

But I think that should be done after mitigations are in place to protect end users, or if the vendor is not taking good-faith steps to mitigate the problem.

And I am not saying one should be "restrained from speaking" at all. I am saying that choosing to do so makes one an asshole, and that decent people should strive to not be assholes.

I don't understand the chronology you're working from. The timeline here shouldn't start from "when the independent researchers find something in their basement". It should, rather, start from "when the first MRD for the product is sent from the PM to the development team". That's when the clock starts ticking on mitigation. AMD had years.

An eye for an eye works only until everyone is blind.

You seem to have several deeply misguided premises.

1. We don't know that AMD knowingly shipped these chips despite them being vulnerable. Bugs happen.

2. Even if this was the case, an individual can, and ought to, show decency and empathy towards others.

3. This last comment of yours is a straw man, and I doubt you are incapable of seeing this. Your parent's argument was much more nuanced and elaborate than your rebuttal.

I don't think you understand the dynamics here. I don't think anyone knowingly shipped vulnerabilities. That's an impossibly low bar: all you have to do to "not know" is to not spend any money on security verification. The complaint here is that AMD was outdone on verification by 4 dudes in a basement.

I think saying that they were outdone by 4 dudes in a basement is intellectually dishonest. There are a lot of dudes in a lot of basements looking for vulnerabilities all the time. Those four happened to find it, but there were hundreds of others looking. There's no amount of money that AMD can spend that would prevent them from eventually being outgunned by all the hackers, intelligence services, and security researchers looking to break their products.

Why do you assume that there were hundreds of other people looking for these vulnerabilities? Chances are, when we learn the technical details, we're going to find out that they're bog-standard memory corruption flaws in driver code, and that the thing that prevented anyone from discovering them was that nobody looked for them.

You honestly think nobody was looking for security vulnerabilities?

In drivers for an AMD security feature almost nobody uses? Yes.

Have you ever worked with a code base before? Even when you scrutinize for bugs, they can still go unspotted. Sometimes hundreds of people can look at the same code and not see anything wrong with it. Software has the benefit of higher levels of abstraction; I haven't designed any hardware, but as far as I'm aware it's not easy to abstract, which makes things much harder to find. While 4 guys in a basement may have found this vulnerability, it doesn't mean they will find every vulnerability, or that anyone else would have found this one as they did. Throwing money at verification will not make it foolproof.

I've been a professional software developer since 1995.

> If there was working Usenet search in 2018, you could find me making approximately the same argument back in the 1990s, when I worked as a researcher at SNI, the world's first commercial vulnerability research lab.

This is a controversial topic straight at the intersection of technology, the way it has changed and affected society, the public good, and our dependence on technology, so I really don't think that "I haven't changed my mind about this in 28 years" supports your argument ...

And honestly I would say that whether I agree or not.

I wasn't working in security but I definitely moved my opinion on the matter. In the (late) 90s I was mostly for full public disclosure arguing the same "we're better off when we have the most information available to us". But today I'm leaning way more towards "responsible disclosure is good" (as you can tell I'm also not 100% black-and-white on the matter like you said you are).

Maybe it's because I was younger then and had more of a reckless mentality and an innocent belief that people will make the right choices given enough information.

Maybe it's because in the past 28 years technology has changed our society to such an extent that impact of security vulnerabilities is rather incomparable to the impact they had back then.

Maybe it's because I definitely don't believe that you can defend this opinion with the very same arguments that were used back then without even addressing the spread of information technology and the drastic way they altered society in the past 28 years.

Maybe it's because I now realise that I myself am not always better off with more information if I can't act on it, and therefore it's not reasonable to assume that as a general rule. Which is very much something I had yet to learn 28 years ago; I had to swallow some pride. I wish everybody was as clever as I was back then ...

>Almost every end user has at least one last-resort mitigation for any vulnerability: the power switch.

So if a hospital runs life support on a vulnerable chip, they should just hit the power switch until it's fixed?

Or what about a computer controlling a nuclear power plant? An airplane? Spacecraft or Satellite?

Vulnerabilities don't restrict themselves to equipment that is non-essential for people to survive, or that wouldn't cost millions to replace in consequence of a hack or shutdown (please try to revive a satellite after a full shutdown; I'll be awaiting your report on how you realign the antenna).

> you are a better person than that

I don't think this is an appropriate way to argue. Sounds like if he disagrees with you, he is somehow below standard.

> It's just...minimal decency, to care about other people.

Alerting folks to the danger that they face is one way to do so. Responsible Disclosure is caring about the vendor, whereas full disclosure gives other people the chance to take action on their own to remove themselves from harm.

As another note, why not argue for Responsible Development? This is where the outcry should be. Flaws in products come about because they are shipped before they are finished.

> Responsible Disclosure is caring about the vendor, whereas full disclosure gives other people the chance to take action on their own to remove themselves from harm.

That is true, but you missed the other side of the argument. Coordinated disclosure is also preferable for a significant part of users/customers, who have neither the understanding nor the incentive to mitigate on their own. So the question the discoverer of a bug faces is: how much of a head start should I give the vendor, and the users that depend on the vendor, before I make this public? This has no universal answer; it may depend on how long the bug has been out there and what kind of users may be harmed. But it is easy to see that a head start of a few weeks is more reasonable than a head start of zero, especially for bugs that have been out there for years.

> why not argue for Responsible Development? This is where the outcry should be. Flaws in products come about because they are shipped before they are finished.

Flaws are not always due to cutting corners. Some bugs in computers are very unintuitive, and it can be years before they manifest. More responsible development seems like a good idea, but again, this ignores the other part of the problem: a major group of users does not understand the intricacies of development and is not willing to buy a more 'responsible' product if it is 5 years behind the newest trend and costs 5x as much.

What about the flaws that aren't unintuitive? What about the bog standard integer overflows vendors routinely leave in code because they won't pay what it costs to ensure they don't ship them?

So let me get this straight: are you arguing that because some portion of bugs each year is due to vendor negligence, it is OK for us security researchers to make the vulnerabilities public and expose users dependent on the vendor any time we want?

Obviously, yes. Your "some portion of" should read "virtually all".

I answered your question. But you didn't answer my question.

What about the flaws that aren't unintuitive? What about the bog standard integer overflows vendors routinely leave in code because they won't pay what it costs to ensure they don't ship them?

How does that matter? The only thing that matters is the harm that certain types of disclosures will do to average users. It doesn't matter whether a bug could have easily been found before release or not; the bug is there, in the wild, in a position to harm users.

By all means, vendors should be taken to task, and be beaten up even more when a bug was easily avoidable. But a bug's stupidity is completely unrelated to how a user might be harmed by an "irresponsible" disclosure. Giving the vendor their just deserts is secondary to that.
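(Tangent for readers wondering what a "bog-standard integer overflow" mentioned above actually looks like: below is a minimal, hypothetical sketch, not taken from any real driver, emulating 32-bit unsigned wraparound in Python. The names and constants are made up for illustration.)

```python
BUF_SIZE = 64
U32 = 0xFFFFFFFF  # mask to emulate 32-bit unsigned wraparound


def bounds_check_ok(offset: int, length: int) -> bool:
    """Buggy bounds check of the kind that routinely ships in C driver
    code: offset + length wraps modulo 2**32, so a huge attacker-supplied
    length can slip past a comparison that looks reasonable at a glance."""
    return ((offset + length) & U32) <= BUF_SIZE


# Honest inputs behave as expected:
assert bounds_check_ok(0, 64) is True   # exactly fills the buffer
assert bounds_check_ok(0, 65) is False  # one byte too many, rejected

# Malicious input: 2 + 0xFFFFFFFF wraps to 1, so the check passes
# even though the requested copy length is ~4 GB:
assert bounds_check_ok(2, 0xFFFFFFFF) is True
```

The fix is equally bog-standard (check `length <= BUF_SIZE - offset` with `offset <= BUF_SIZE` verified first), which is the commenter's point: these bugs are cheap to prevent and cheap to find, if anyone looks.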

> The only thing that matters is the harm that certain types of disclosures will do to average users

I disagree. This does not account for the fact that malicious actors are likely to exploit these before the vendor fixes them, on a schedule the vendor would prefer to dictate. And not all users are incapable of making alternative judgments about the use of vulnerable technology. Users include my Mom, hackers at small companies, and giant corporations capable of turning off SMBv1 overnight.

The harm to users comes from vulnerable software that the vendors put there in the first place.

> I do not need to be a security researcher to understand that they, as with everyone else, have an obligation to the body politic to not be a dick

So are you talking about AMD being dicks by releasing buggy chips, or the researchers somehow being dicks for finding out?

Related question: if a "food security researcher" discovered a vendor was selling contaminated produce - would it be reasonable for them to give the vendor 90 days notice before telling the public?

While I think it's reasonable and appropriate professional practice for _some people/teams_ to go down the "coordinated disclosure" path (I think the world is a better place for having Tavis Ormandy disclose the way he chooses to), it does without doubt benefit the company whose products are flawed more than the researcher or the public. Anybody who knows they work at a firm that's going to be described dismissively, as AMD did here with "This company was previously unknown to AMD", is quite likely correct to publish-and-be-damned, because you can bet there's a non-zero chance that AMD's response to non-public disclosure would include either stonewalling and stringing the problem out as long as possible, or lawyering up and threatening to sue the "previously unknown to AMD" company into oblivion.

If you don't want public disclosure of security flaws about your products, either don't make flawed products or don't ship them to the public. Especially if some of the key selling features of said product include bullet points like "AMD Secure OS".

> Related question: if a "food security researcher" discovered a vendor was selling contaminated produce - would it be reasonable for them to give the vendor 90 days notice before telling the public?

This example is absolutely farcical. It's not even close to the same thing and you know it. A security flaw is not equivalent to poisoned food - it still requires outside action to be exploited.

> about AMD being dicks by releasing buggy chips

Everybody releasing chips releases buggy chips. It's the current reality of both hardware and software. Unless they do it maliciously, they're not dicks.

Does everyone who releases drivers release buggy drivers?

Close to 100% of software has bugs. Almost all drivers have bugs. Anything that prioritises company profit and release dates over complete correctness in sectors where bugs == deaths, will have bugs. (And even those sectors are not magically immune) So yes - I expect they do.

So are vendors who release buggy drivers not "dicks" for the same reason that chip manufacturers aren't?

Unless they released it maliciously, I don't hold it against them. And wouldn't call anyone a dick unless they planned to do something evil.

Exceptions: issue was known but got ignored due to release schedule, or security was never mentioned in the project and at no level was there any security consideration. But that's for specific management issues, not engineers or the vendor in general.

That's an incredibly low bar. All you have to do to meet it is not actively look for security vulnerabilities in your products.

What's important aren't really the bugs, bugs can be fixed. What's important is who is allowed to run, inspect, share, and modify the code. If only the copyright holder is allowed to do this, that's proprietary software and that's malicious. If a user's software freedom is respected so users can choose to fix it themselves, wait for another release, hire someone else to fix the code, or live with the bugs that's treating the user properly.

Everyone makes mistakes; it's more about how those mistakes are handled and if a user's control over their computer is respected.

>Independent researchers don't owe AMD a chance to address anything.

The goal of responsible disclosure windows has nothing to do with saving face for the company. The point is that it gives the company time to come out with a fix, so that their customers aren't left with massive holes in the security of their systems.

That presumes the only response people have to vulnerabilities is to patch. But that's never the only response people have: people can trade availability for exposure. But because nobody in the industry wants to face a costly tradeoff like that, we pretend that we're stuck with the lowest common denominator response.

What about responsible disclosure ethics? Yeah, they don't owe AMD anything, but all AMD users lose: in their televised disclosure interview, the researchers claimed it is virtually impossible for any security product to mitigate these vulnerabilities.


Responsible disclosure is an Orwellian term literally coined by vendors as a way to coerce researchers into adhering to vendor schedules and vendor PR plans.


Btw, your HN search result page links to all of the references for what you THINK the term "responsible disclosure" means. Be it "coordinated disclosure" or whatever else, I don't care. But I don't think it's ethical to disclose security vulnerabilities to the wild without first contacting the vendor, giving them a timeline (which should be MUCH LONGER than 24 hours), and giving them the benefit of the doubt.

Hypothetically speaking, if you are researching vulnerabilities solely for money (because you can sell them to 3rd parties, or your side hedge-fund business can profit from disclosures in the stock market), then shame on you, because you are doing society a disservice and gaining from everyone else's losses. To me, you are as evil as the hackers who exploit them.

There's a decent counterargument - take it or leave it - that this kind of research is extremely difficult and expensive, and upon success, privately weaponizing it and/or selling it to organized crime or nation state-level actors is extremely attractive, and therefore, the ability to short the stock of a sloppy/insufficiently-careful HW vendor to fully or partially fund the research instead is legitimate in that it ultimately improves overall-societal welfare relative to those other alternatives.

Vendors who wish to discourage that behavior could offer comparably-large bug bounties instead. And, of course, make their products more secure in the first place.

Seriously, if some fly-by-night security outfit has managed to discover this, they're probably not the only ones.

They didn't blow full technical details on this exploit after 24 hours; they went public with a summary, one so high-level that many people doubt the flaws even exist. That's not exactly dumping a zero-day on the internet either.

There's a whole lot of shooting-the-messenger going on with this topic. Making plays against the stock is scummy and possibly illegal, but that doesn't make the exploits here any less real (assuming they are). These are actually quite serious breaks, potentially VMs can jump the sandbox straight into SMM mode and PSP, so it actually is much more severe than just "root password lets you do root things".


There is a long and storied history of showing the disadvantages of your competitor's products. Edison went on a campaign against Westinghouse's AC electricity, culminating in him electrocuting an elephant to death to demonstrate how dangerous it is.

Right now we need more spotlights on computer security than ever, and as long as it gets bugs patched (hardware, software, or firmware) I don't really care who's doing it or what their short-run motivations are. If AMD won't secure their code appropriately and Intel wants to call them out, fine. If Intel is leaking timings through sidechannels and AMD wants to call them out on it, fine.

And if we want to throw stones here, it was AMD who blew the embargo on Meltdown a week early because they wanted to force a response from Intel at CES... different in degree, not really in kind.

So you believe in the absolute freedom of security vulnerability disclosure, and that security researchers should just disclose at will?

Do you know that the general public is usually the ultimate victim, the one impacted by these vulnerabilities the most?

Especially since Intel and AMD are corporations worth tens of billions, near-monopolies in their fields: if their CPUs have unpatched zero-days, with sample code and exploitation techniques out in the wild, what else are you going to use in your desktop computers?

We've seen something similar happen to Microsoft after the Shadow Brokers disclosure.[1] It would be worse for hardware products, as it's virtually impossible to retroactively fix silicon chips.

[1]: https://www.wired.com/story/eternalblue-leaked-nsa-spy-tool-...

What happened after the Shadow Brokers? Are we using dogs now to detect the Terminators?

BTW, one of the twitter threads went into their disclosure philosophy.


It turns out it's exactly the "release a general idea to the public to light a fire under the vendor's ass, only release exact technical details to the people who need to know" that you might expect. They didn't dump a zero-day into public.

Let's use "coordinated disclosure" then - that is generally considered the preferred method these days.

> What about responsible disclosure ethics?

"Responsible" to whom? Terms like these indicate what side one takes, such as how one expands the term "DRM": digital rights management means taking the 1%/elite side favored by the publisher, the few in power. 'Digital restrictions management' highlights what's happening from the user's side, the 99%, the side of the many. Similarly with the harm to the users and the desire for freedom in the term "jailbreaking".

So, since we recognize the reporters owe AMD nothing, to whom are they "responsible"? Or what are they responsible for?

This phrase strikes me as useless except to try to foist a responsibility on people that they don't actually have, and to get the relatively powerless to serve the interests of power -- users who can't inspect, edit, or share edited CPU microcode are somehow not acting responsibly if they don't give proprietors sufficient notice.

Where is the "responsible disclosure" for Intel when they refuse to let users fully control the signing keys used in the software that sees every network packet before the rest of the computer (for inbound network traffic) and before a packet leaves the computer (for outbound traffic)? The one-sidedness of it all sticks out like a sore thumb.

3. The vulnerabilities are minor, barely worse than normal expected behaviour; just enough to call them vulnerabilities. All these "exploits" consist of using ultra-privileged access (signed device drivers, or flashing the BIOS) for bad purposes.

In the white paper, many attacks are hypothetical and many phrases are vague and slippery, suggesting the "researchers" barely achieved execution of something, not real payloads.

I hope AMD invests the little money needed to fund this sort of PR campaign, er, research initiative, against Intel. The net result would be a greater awareness of the perils of "sponsored" science and of the poor state of PC security.

You're recapitulating an argument that Arrigo Triulzi posted on Twitter based on his reading of the CTS-Labs white paper. The white paper doesn't include technical information about the flaws.

Dan Guido and Trail of Bits got to read the actual report, and vouched for them as real vulnerabilities. The fact that there are vulnerabilities in signed drivers is a bad thing: it means that AMD shipped cryptographically signed versions of vulnerabilities. Arrigo's twitter thread implied that the use of signed code somehow mitigated the vulnerabilities, but the opposite thing is true.

Pwn2own, iOS jailbreaking, and Playstation hacking have shown time and time again that chaining up seemingly innocuous exploits to get to the stage where you can run a "minor exploit which requires ultra-privileged access" is definitely within the reach of bored/smart teenagers with no more motivation than a new laptop or gaining the ability to pirate or cheat at games...

Suggesting this is "just hypothetical" because "nobody is going to get physical access or code execution in a signed driver" is pretty shortsighted in my opinion...

When the enemy manages to install a signed driver or flash the BIOS, the difference between being 100% owned by design and being 105% owned because of this sort of vulnerability is the last thing to worry about.


3. The vulnerabilities are real, but their impact is being overstated because behind the security researchers is a financial firm hoping to make a buck on stock trades.

#3 would seem to me to be a bad development for your niche, if it became a popular business model.

(I have no horse in this race.)

I wonder if the unusually short disclosure process was related to their disclosure of related financial interests.

If it was, it’d seem that this research was in support of a financial play similar to how Muddy Waters shorted St. Jude Medical on the basis of insecure medical devices. That would appear to be a legitimate strategy, but if the market didn’t punish Intel for their processor vulnerabilities it seems likely they’d react similarly here and the research would fail to move the stock price in any significant way.

How about

3. The vulnerabilities are real, and something smells real fishy about the way they were released, including what appears to be 4 dudes in a basement.

Except that's not necessarily something to get "outraged" about, just something to keep an eye on while this story develops.

The only one I see shouting "this is an outrage!" appears to .. be made of .. straw?

2. Shouldn’t be news given that a couple of dozen Saudis outmaneuvered a superpower, taking out two skyscrapers and thousands of people. That’s also no excuse for the behavior in question.

I’ll add 3. It’s not all about the researchers and AMD, but the people who use AMD chips and deserve a modicum of protection and consideration. Unless there were exploits in the wild, the security of users seems not to have entered into this.

I'm not saying it was them, but I wouldn't be surprised if Intel were trying to recover from its reputational damage by hiring people to heavily research breaks in AMD chips, to even the reputational playing field. They're the ones who stand to gain the most from this legal but shady tactic, and they have reportedly been scared of losing their long-held market dominance in desktops and servers. IIRC, AMD wasn't vulnerable to Meltdown, which I speculate changed the market calculus in ways detrimental to Intel that both companies would be well aware of.

Interestingly, you'll note that the researchers claim public interest as their reason for non-standard practices, yet it was later revealed that you need admin privileges to exploit the flaws. The rhetoric the researchers use is inflammatory and staged in a media-savvy way, like a PR campaign.

This is a totally evidence free assertion and I'm not an infosec person (and am therefore happy to be set straight by experts) but I'll be happy to crack open the popcorn if something interesting is revealed a few years down the line.

I wonder if this could be Intel up to its old tricks again.

This is so fishy to me, it's so unprofessional of a security researcher. My tinfoil hat is already rattling...

> 1-day notice? Such aggressive wording without even the chance for AMD to address the concerns?

Are the reporting parties under any obligation to give AMD notice?

Behaving according to AMD's wishes is not an obligation. Businesses will be the first to tell you that agreements and laws form obligations, not what someone perceives as a nice thing to do.

If not, then you're reacting to a distraction, a detail that doesn't matter: the corporate-friendly tech press is trying to shift blame away from the party that either sold CPUs with bugs in them (mistakes happen, and this is unfortunate) or distributed nonfree (proprietary, user-subjugating) software which also happens to contain vulnerabilities (a malicious and unjust way to distribute software).


"you are advised that we may have, either directly or indirectly, an economic interest in the performance of the securities of the companies whose products are the subject of our reports"

People here seem to be mentioning short sellers being connected to this research as if there's some sinister collusion going on.

This is the entire point of short selling, and the SEC encourages this type of activism. It allows people who can provide expert knowledge to profit off a trade if they can reveal damaging and legitimate information about a company.

For example, a short seller last year revealed (through extensive research) that Valeant Pharmaceuticals was stuffing its channels and faking its finances. He placed a huge short sell and went public with the damaging info, tanking the stock from $270 to $12, and made a ton of profit off of it: https://www.nytimes.com/2017/06/08/magazine/the-bounty-hunte...

Without this incentive, why would anyone bother to reveal damaging info? You're placing yourself as a target with no reward. The payment is the natural balance of the market.

So yes, this research firm is connected with a hedge fund, and they have a very vested interest. But that doesn't make their claims untrue.

Having a financial incentive to mess up AMD might explain why they only gave 24 hours' warning, though.

It's also a huge incentive to overstate the severity. Their goal is to profit off the panic they can produce, so every statement they make is likely heavily biased in that direction.

That said, I don't mind that these "research" organizations exist. Only bothers me when they put the general public at risk (or attempt to) for their own gain.

The point is to counterbalance the other side: companies have an incentive to overstate their upside and understate their risk.

Short sellers want the opposite. So they both present their best cases and let the public decide, much like how lawyers will defend their own clients to the last breath regardless of the amount of evidence against them

There is far far more incentive for AMD and its partners to understate the severity.

I disagree. AMD needs to maintain its reputation over time. Short sellers make their profit over a few hours/days and don't care if they are proven wrong.

So, AMD has vastly more incentive to be accurate than short sellers.

Pity that vast incentive didn't seem to work out when they promoted all these chips as having "Firmware Trusted Platform Module", "Secure Encrypted Virtualization", "AMD Secure Processor", and "AMD Secure OS" as features.

AMD's incentive, like any corporation's, is to maximise shareholder value. Same as any tiny little security research firm. If a research firm can maximise its profit by discovering vulnerabilities and shorting stock before disclosing them, is that any ethically worse than a chip company rushing out flawed hardware with big flashy marketing bullet points claiming how secure it is?

(I'm not saying short-selling chip vendor stocks on the back of vulnerabilities is a way I'd choose to make a living, but surveillance capitalism doesn't seem an "ethically better" industry to work in either...)

CPUs have real value.

As to ethics, that's mostly irrelevant to this discussion. Both sides could behave ethically; I am simply pointing out which side has the larger incentive to exaggerate. After all, the stock could drop and a short seller could still lose money. They need the stock to drop a lot, even over a minor issue.

While the dollar value of AMD's incentive is without doubt larger, the smaller amount incentivising the researchers is life-changing for them, and likely more motivating...

I'd argue that vulnerability research has real value as well.

On an individual level, it's much less I'd think. Inciting a panic could be life changing amounts of money for the researchers, paid out by the hedge fund returns.

AMD isn't going to crash and burn over these flaws any more than Intel (at a 5-year high) did.

This is the crux of it. The short disclosure window could hurt 3rd parties unnecessarily.

Although I enjoy reading grandparent's counterpoint

More and more lately I'm leaning towards the "responsible disclosure is a bunch of crap" camp. You have to be "in" to get the news. Even if you're "in", security people love to play info-war power games and withhold things because it tickles their jimmies, etc. And don't forget, you're deliberately keeping a vulnerability secret from consumers during a long period where you have no idea who else knows about it. If I'm a "user" or third party and there's a critical vuln in some system I depend on, I want to know that I shouldn't use it, or that I should take extra caution or whatever, rather than being kept clueless, all in the name of the vendor's image.

This is how the whole industry ran in the mid-1990s. There were secret vendor lists that the cool kids got to be on. If you didn't have the right friends, you were shut out. Vendors took their sweet time getting patches out, because their preferred customers were all read in and had workarounds in place. It was a shitty way to organize an industry, and it fell apart with Bugtraq and full-disclosure security.

It's sad to see people arguing for a return to those norms, especially since the rejection of them correlates with a renaissance in our understanding of how to secure software.

It looks like the short notice in this case is not intended to force a timely fix, but to prevent it. They are hoping to cause as much damage to the company as possible, both directly and indirectly through its customers, so they can profiteer from it.

I'd say that the intent makes this qualitatively different to what I'd consider legitimate disclosure.

The flip side to that is to ask whether AMD have been "profiteering" from their customers by deceiving them about the security of their products?

It's not like their marketing copy makes accurate claims like:

"We're reasonably sure our Firmware Trusted Platform Module is trustworthy, but we ran out of time to pentest it properly before we shipped it."


"Ryzen features Probably-Secure Encrypted Virtualization! Our interns couldn't break it in an afternoon of trying! The data looks random enough to us..."

How much does "the intent" of their marketing copy and claims come into play?

> It's sad to see people arguing for a return to those norms

Where do you see anyone arguing for that? Or is it just a strawman? What I see is not people arguing against disclosure but people arguing for disclosure with an embargo longer than a day. You're going to have a hard time proving that one day is a norm, or that it correlates with a renaissance in securing software. Your response looks much more like circling the wagons when a member of your tribe is criticized.

I agree, but I am curious if you have any suggestions on how we should be handling disclosure?

If some security researchers are currently choosing immediate highly publicised disclosure and short selling because it's the most profitable path for them - perhaps companies should reconsider their default/expected response to vendor-privileged-disclosure?

It's not like AMD set their chip prices based on "ethics" or "duty to the public". As "the public" I'd prefer a Ryzen 1900X to sell for $150 rather than $500; it's just a bunch of sand, after all (plus some intellectual effort). I don't think AMD gets to choose its pricing model and then complain about how security companies price/sell their intellectual work...

Don't sue people if they publish vulnerabilities without any notification to the vendor, as long as they never overstepped and exploited it themselves.

For what it’s worth, this is a fierce debate that goes back decades. There is widespread disagreement among professionals in the field.

But what if we give the list a really cool name like gazorpazorp?

> Having a financial incentive to mess up AMD might explain why they only gave 24 hours' warning, though.

A good way for companies to prevent this is to have a generous bug bounty program. Money is still transferred from the shareholders to the researchers, but then the company can impose conditions like delaying public disclosure for a reasonable time to prepare a fix.

If it's actually someone attempting to make money on a short or to benefit from a working relationship with a competitor, then a bug bounty program does nothing. No one can run a bounty program that pays out anywhere near as much as the information is actually worth to an adversary. Bug bounties work to engender a bit of good will among researchers and to provide some incentive to an otherwise neutral party to play ball. They don't mean shit to a hedge fund or a competitor in a multi-billion dollar industry.

Not unless the bounties are large enough to attract the attention of a hedge fund.

> Not unless the bounties are large enough to attract the attention of a hedge fund.

Which they should be if the alternative is a much larger loss to the company's share value. The shareholders come out ahead to pay five million on a bug bounty if the alternative is to lose a billion dollars in market cap.
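The trade-off being argued here is just an expected-value comparison. A toy sketch (all figures hypothetical, not drawn from any real bounty program or valuation):

```python
# Toy comparison: pay a bug bounty now vs. risk a market-cap hit from
# an uncoordinated public disclosure. All numbers are hypothetical.

def expected_disclosure_loss(market_cap: float,
                             drop_fraction: float,
                             prob_public_disclosure: float) -> float:
    """Expected shareholder loss if the bug is disclosed unpatched."""
    return market_cap * drop_fraction * prob_public_disclosure


def bounty_is_cheaper(bounty: float,
                      market_cap: float,
                      drop_fraction: float,
                      prob_public_disclosure: float) -> bool:
    """True when paying the bounty beats the expected disclosure loss."""
    loss = expected_disclosure_loss(market_cap, drop_fraction,
                                    prob_public_disclosure)
    return bounty < loss


# A $5M bounty vs. a $10B company facing a 10% drop with 50% odds of
# an uncoordinated disclosure: expected loss is $500M, so the bounty
# is the cheaper option for shareholders by two orders of magnitude.
print(bounty_is_cheaper(5e6, 10e9, 0.10, 0.5))
```

Of course, the argument upthread is that this only holds if the researcher can't make even more by trading on the information instead.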

I'm not a finance expert, but my very lay person understanding of how financial markets work tells me that those would have to be some rather huge bounties. See e.g., the effect on Intel from earlier this year:


It's not their problem. There's no obligation for them to give any warning at all. They can just go public, short the stock, and watch it fall. The warning is just a polite thing to do.

Not even "polite", really; it's benefitting the vendor's bottom line at the expense of the researchers.

Do you think there's _any_ chance AMD would have offered these guys money in the sort of magnitude they stand to gain short selling AMD?

I'm pretty sure if they'd asked AMD would have responded with a blackmail lawsuit instantly.

That's fine, but it doesn't change the fact that the possibility (likelihood?) of financial gain affects the authors' credibility. Especially since it is already strained by other issues with this disclosure.

It seems to me that disclosing vulnerabilities is in a different category from disclosing fraud. In the latter case, the only entities that suffer materially are the fraudulent organization and its investors; in the former, you have the additional potential to expose all users of the vulnerable software to risk.

From "Viceroy Research":

>We believe AMD is worth $0.00 and will have no choice but to file for Chapter 11 (Bankruptcy) in order to effectively deal with the repercussions of recent discoveries.

Direct quote from: https://viceroyresearch.files.wordpress.com/2018/03/amd-the-...

These guys are slimy as hell, this is disgusting.

At what point does it go from being legal (utilizing information that anyone could have discovered with enough time and effort, whether through short sale or investment) to illegal (stock manipulation through rumor or innuendo)? This qualifies in my eyes, but it's probably hard to prove when one is attached to the other. I agree, it does feel slimy.

Mentioned in another comment, but from their management page: http://www.cts-labs.com/management-team

> He [Yaron, CFO] is also the founder and Managing Director of NineWells Capital, a hedge fund that invests in public equities internationally.

I wonder how linked the companies are - is this basically a vulnerability research company as a research arm of a hedge fund?

It sure seems that way. It wouldn't be the first; look, for instance, at Justine Bone's MedSec.

There was also Mark Cuban's Sharesleuth: https://www.wired.com/2007/09/mf-sharesleuth/

This is too well organized and presented. My guess is that this has to be financed in some part by a group of short-sellers.

They made a rookie mistake though - AMD is plagued by day-traders and algorithms who couldn't give a damn about the fundamentals.

Boy the future of capital markets is looking grim.

> They made a rookie mistake though - AMD is plagued by day-traders and algorithms who couldn't give a damn about the fundamentals.

Seriously. AMD stock is trading up 3+% at the time of my comment, and it's climbed since the disclosures this morning.

Something tells me this backfired.

Discl: I've been long AMD for a long frickin' time.

In the long term, the market is a weighing machine. Time will tell.

A new twist on an old game. I hear people ask why short-selling exists; it's a good check against corruption but prone to its own abuses. Citron Research (a short-sell shop) is a good example of this: they savaged companies like NQ Mobile, Lumber Liquidators, etc. and made a bundle doing it.

The security angle is a fascinating and concerning new development, however. That said it may encourage more secure practices (as opposed to theater) through the hardware/software lifecycle in response to serious fundamental design problems.

It will also serve to increase the premium on 0days...

> It will also serve to increase the premium on 0days...

I strongly doubt that. I've seen incredibly serious vulnerabilities I've reported firsthand have little to no impact on a company's valuation when publicized.

But did you create an entire website about the vulnerability, including graphics and headline-friendly names, as well as sending out briefings to major media outlets ahead of the disclosure? Because that's what this group did.

Admittedly no, but considering AMD is up ~3.85% as of this writing, I'm not sure I'd have benefitted from doing so.

Just look at what Citron Research did to Shopify last year. They tanked the stock from $120 to $93 just based on false accusations that they put out in a "report".

Now Shopify is closer to $150... so their plan worked.

> just based on false accusations that they put out in a "report".

If it's false information, isn't that classic stock manipulation? I thought for it to be legal to make money on the stock it had to be both accurate and publicly available (if potentially hard to put together)?

They make claims that are demonstrably...stupid. I don't know if there's a better, more nuanced word to use here. It's trolling in broad daylight from what I can tell.

Watch the video and see for yourself: http://citronresearch.com/citron-exposes-the-dark-side-of-sh...

That video by itself tanked the stock for many many weeks, until they finally reported quarterly results and it started climbing again.

I'm glad the CEO didn't feed the trolls by acknowledging this report in any depth.

Also shows how irrational the stock market is in the short term.

Do you know where the line is between what e.g. Citron Research is doing and what is considered slander? (I assume they walk a very thin line in order to not get sued)

Their Shopify video [1], for example, is not the typical "research report" with lots of specifics but more of a personal opinion with rather broad accusations.

[1] http://citronresearch.com/citron-exposes-the-dark-side-of-sh...

Citron Research? Total hacks, and the premise that they provide the market any value is a stretch at best. Sometimes right and lots of times incredibly wrong, but they make money on investors panicking immediately.

I agree on the premise of moving the market, but they don't necessarily need to be short sellers only; they could have hedged both ways and still made money.

They could have exercised puts if it went down (which it did in the morning), or bought stock/calls both before the site release and in the event of it going down, because they knew it would either not be a concern or be dispelled by AMD.

Unless this is truly a flaw, in which case they can still buy more puts and just wait for AMD's official response.

What if I told you you can lose money on straddles and bear spreads.
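For what it's worth, the break-even math for a long straddle is simple: you lose whenever the underlying moves less than the combined premium paid. A sketch with made-up prices (the strike and premia below are hypothetical, not real AMD quotes):

```python
# P&L at expiry of a long straddle: buy one call and one put at the
# same strike. You profit only if the move exceeds the total premium.
# All prices are hypothetical.

def straddle_pnl(spot_at_expiry: float,
                 strike: float,
                 call_premium: float,
                 put_premium: float) -> float:
    call_payoff = max(spot_at_expiry - strike, 0.0)
    put_payoff = max(strike - spot_at_expiry, 0.0)
    return call_payoff + put_payoff - (call_premium + put_premium)


# Strike $11.50, $0.40 paid for each leg ($0.80 total premium):
print(straddle_pnl(11.60, 11.50, 0.40, 0.40))  # small move: a loss
print(straddle_pnl(9.00, 11.50, 0.40, 0.40))   # big move: a profit
```

So with a stock that shrugs off the news and barely moves, as AMD did here, both straddles and bear spreads bleed premium.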

Agreed on the losing-money part in general, but in a stock as volatile as AMD, I feel like there is an opportunity for this type of action; that may not be the case for others.

Also, as nothing has been verified about the report (from AMD), there is still the potential for this to move either way.

Great username BTW

I can vet the guys who published this.

This is legit, and they haven't published anything that can be used maliciously.

The security flaws aren't really the issue. The way they did it seems like they have an interest to manipulate the stock.

This is too well organized and presented.

For what?

My guess is that this has to be financed in some part by a group of short-sellers.

What evidence do you have of that other than 'too well presented'? It sounds like a conspiracy theory, not a guess.

AMD's stock was negative multiple times today ($11.38 at 10 AM and again at noon on March 13, 2018, on NASDAQ). Shorting the stock would be an obvious play. I have heard of people thinking about trading on security flaws in products but never seen it done in real life.

I've done it once or twice when I reported a vulnerability directly to a company and I knew they'd have to report it to downstream customers pretty quickly. I've also been in discussions for larger vulnerabilities with security-focused hedge funds such as Muddy Waters. Generally I'm weakly skeptical about profiting from it consistently. In particular, funds like Muddy Waters have a pretty high bar for the sort of vulnerability they're willing to work with. You need not only a severe vulnerability, but the right kind of vulnerability, so you know that it can't be swept under the rug.

That said, it's pretty striking to me how aggressive this disclosure is. It may be an attempt to narrow the window and increase the profitability of a short sell.

There is also some questionable research group involved in all of this: https://viceroyresearch.org/2018/03/13/amd-the-obituary/

It's not uncommon for short sellers to take a position first before releasing a report like this to drive the stock lower. Of course, there are legitimate groups that, in the past, have unearthed real issues and corporate misconduct, but there are also questionable groups that will release reports with little to no substance. This case certainly does look dubious, but I'd like to see an assessment by a reputable security expert.

Yes, and it's $11.77 now (up 2%).

This is not in the same league, but I recall AMD/INTC also traded up during the Spectre/Meltdown debate. A lot of insecure chips ironically leads to a lot of demand for new, more secure chips.

... And yet they give 24h notice.

Yeah, right, this is definitely not being used to affect the share price!

That’s... sort of ok? It’s not perfect, but it opens up another avenue to finance security audits besides selling exploits to intelligence services, attacking end-users (both worse), and collecting rewards from the companies (better).

I always find the importance of these disclaimers blown way out of proportion to their probable economic impact. AMD shares are -up- 2% right now, for a presumably negative piece of news. The stock market is a big and sometimes inscrutable place, but ethics likes to treat things as morally black and white.

Is this kind of language common in other security disclosures?

No, this is a first. Even MedSec was more coy than this.

Almost always these types of security incidents and breaches NEVER move stock prices negatively because frankly they don't impact business. $AMD is currently trading up 3.5% as of writing this. :-)

Doesn't seem to have a noticeable impact though, and based on the (lack of) impact of most previous security issues, I wouldn't have expected it either.

These guys are essentially more black hat than white hat

No they aren't. Aside from the inherent and obvious lack of nuance in that terminology, black hats do not report their vulnerabilities. They weaponize them and use them, or they sell them to criminal organizations.

Black hat isn't distinguished by failing to report vunlerabilities. It's distinguished by bad faith.

No, it's actually not. It's distinguished precisely by using a vulnerability with the intention to compromise others. You can't just redefine "black hat" to be whatever normative disagreement you have with how people choose to disclose vulnerabilities. That's entirely subjective.

This is what wikipedia says:

A black hat hacker (or black-hat hacker) is a hacker who "violates computer security for little reason beyond maliciousness or for personal gain"

The personal gain part certainly fits with short selling the stock.

Excellent, great citation! Now, precisely what did the security researchers hack for their own gain, and precisely which computer's security was violated?

If we can call them "hackers" just because they ostensibly compromised their own hardware or software as a proof of concept for the vulnerability research, does that mean that all of Google's Project Zero consists of hackers and black hats because they get paid (personal gain) by Google to find security vulnerabilities?

Project Zero practices responsible disclosure. They do not make money from the exploitation of the companies whose software/hardware they find flaws in. The difference is very stark and you are being deliberately obtuse.

> They do not make money from the exploitation of the companies whose software/hardware they find flaws in.

Right, and neither did these researchers.

In point of fact, no, the difference really isn't all that stark. It's a difference of degree, not category. You apparently have a problem with disclosing vulnerabilities without providing advanced notice to the vendor, and you consider it especially distasteful to do so if you're financially benefitting from that. But all of that still comprises vulnerability disclosure, which is categorically different from actively using a vulnerability to compromise users as part of a criminal enterprise.

We can go back and forth like this all day, because every time someone bends the definition of black hat to fit something they disagree with, I can form a counterpoint which is technically true but which no one is willing to call black hat behavior, like Google Project Zero. On the other hand, if we use the definition of black hats as criminals engaging in online fraud, augmented by security vulnerabilities, then of course Google Project Zero doesn't qualify. You're going to have a very difficult time broadening the scope of this terminology to suit your definition without accidentally including groups you don't want to be in the same bucket.

And that's precisely my point. If you broaden terms too much, like "black hat" to "stuff with computers in bad faith", we can just weasel in whatever satisfies the definition or agrees with our personal viewpoint. Black hat criminals do not engage in debatable behavior, because it's strictly illegal and directly profits at the expense of other people. At best, all you can do is formulate an abstract argument about people being harmed by rapid disclosure, but that actually comes down to a debate of disclosure guidelines, not a debate of activist investing.

No one defined "black hat". Just what authority do you think sets that? There is none. Black hat is not a standard to which people are scrutinized.

There is a reasonably accepted definition for what a "black hat" is. I don't particularly agree with conceptually bucketing people into black hats or white hats, but the paradigm has an existing meaning.

In any case, if we go by what you're saying, then anyone can define "black hat" to mean whatever they want, which means it's a meaningless and unproductive concept to throw around in conversation.

Your assertion is in a catch-22 here. Words have meaning without requiring an independent body to rigorously define them. The established definition of a black hat is someone who compromises other people using security failures for their own gain. If instead we choose to say that the term has no established definition, then the entire point is moot, because calling someone a "black hat" no longer means anything.

There is a "reasonably accepted" definition of black hat, by your reasoning, and it is: someone who uses computers in bad faith.

> There is a "reasonably accepted" definition of black hat, by your reasoning, and it is: someone who uses computers in bad faith.

Speaking as someone who 1) works in the security industry, 2) has managed corporate disclosure programs as an internal security engineer, 3) has run a security consulting firm working with many companies, and 4) has reported security vulnerabilities in disclosure programs; no, that's not the reasonably accepted definition. I can't think of any colleague I've ever worked with off the top of my head, nor any widely read security-focused periodical (like Krebs), who would use the term "black hat" for such a generalized disagreement of ethics.

I think the "security industry" has a delusional image of itself, and I regard most of them as grey hats at best. An insider's opinion on what constitutes black hat is not particularly impressive to me. And this is not a generalized disagreement of ethics. Bad faith has a specific meaning and you are unreasonably stretching it.

> I think the "security industry" has a delusional image of themselves and regard most of them as grey hats at best.

This criticism of the industry might hold more weight if you actually evidenced a willingness to use terminology according to its accepted usage, not as a tool to advance your ethical opinions.

> And this is not a generalized disagreement of ethics.

It actually is, because I strictly disagree that either of 1) trading on bad news, like security vulnerabilities, or 2) disclosing vulnerabilities without notifying the vendor are unethical. You're free to disagree! Your opinion is just as valid as mine; the thing is, we don't define words based on opinions, because then we'd never get anywhere, and we could label people we don't like whatever term we know other people don't like, even if we don't share the same definition of the term. By calling people who do either of #1 or #2 black hats, you're exercising rhetoric that puts them in with actual criminals, doing actual illegal things just because they are doing something you disagree with.

> Bad faith is has a specific meaning and you are unreasonably stretching it.

Okay. I guess I'm free to also call scientists working on whatever thing I disagree with pseudoscientists then, just because I find their work ethically unsettling. Better yet, I could call them criminals.

Words aren't defined by any authority. Their historical and present common uses however are documented by dictionaries et al. The most authoritative source on the term "black hat" is probably esr's jargon file: http://www.catb.org/jargon/html/B/black-hat.html

To save the click: "1. [common among security specialists] A cracker, someone bent on breaking into the system you are protecting."

Your (and hdyr's) looser version is not in common usage and in that sense is wrong.

>Words aren't defined by any authority

This is exactly my point. The Jargon file is pretty dated and imo the definition given there isn't really adequate.

My looser version is indeed in common usage. If nothing else 5 HN users seem to agree with my definition enough to upvote my initial comment on the matter.

black hats use them for bad, white hats use them for good.

ideological discussions about disclosure policy aside, if they are doing this to manipulate stock prices and in doing so create a situation where more actual exploits occur, I'd say that is 'black hat' behavior.. the 'weaponization' is in the 'social engineering' of the market reaction, rather than a direct exploit in this case..

The problem with your first line is that it leaves the definition of black hat open to interpretation, when that is not how the word is actually used in the security industry or in popular reporting. Black hat activity specifically refers to criminal activity, which we can demonstrably perceive and attribute. By your reasoning, I am free to call security researchers black hats if they don't give vendors advance notice. You might disagree with that, but you can't say I'm wrong without making a normative argument about whether or not something is ultimately unethical. There is no categorical difference between me choosing to call people black hats if I disagree with their behavior and you calling these researchers black hats because they're doubling as activist investors.

On the other hand, this entire sideshow is bypassed if we use the well-established definition for "black hat", which refers exclusively to illegal behavior involving security vulnerabilities and online fraud. More to the point, reporting facts is not "market manipulation" (which is also a well established term) even if you want it to be, and "social engineering" is not the same as publicizing information with the intent to move the markets. Using these words in the way you are is the same as flippantly redefining them as you go along, with the result that the conclusion is quite brittle. There could be a strong argument that the behavior is unethical, but using these terms as you are doesn't help that point along, it hampers it.

> Black hat activity specifically refers to criminal activity, which we can demonstrably perceive and attribute

stock manipulation is clearly criminal, if you want to take the 'letter of the law' approach..

beyond this, this gets into the same debate as letter of the law vs spirit of the law, which has both nothing and everything to do with this topic.. black hat is not 'defined exclusively' anywhere, and of course one leaning to a 'letter of the law' argument would then also look for 'exclusive definitions'

as to your point:

> free to call security researchers black hats if they don't give vendors advance notice.

if they are doing this for malicious purposes, yes

if it is for an ideological stance, then, well, it depends on how you view their ideology.

what happens if the law is incorrect?

again, letter of the law vs spirit of the law.

"normative argument about whether or not something is ultimately unethical"

laws are normative arguments about whether or not something is ultimately unethical.. not neutral 'things' that exist in a vacuum. and they can be correct or incorrect, and also incompletely defined..

how does acting completely unethically yet entirely within the law for malicious purposes fit into your framework?

Say for example, actively portscanning (legality nebulous) for already infected computers and then overcharging 2000% for cleanup? Then spamming virii from a jurisdiction where it is not illegal in order to grow this 'business'? All legal.. so it's "white hat?" or is it 'grey hat' because it is in a legal 'gray area'? I don't think that's what grey hat means either..

> laws are normative arguments about whether or not something is ultimately unethical

That wasn't the distinction I was making. A law is a positive statement. An argument of what should be lawful, or an interpretation of a law, is of course normative. But I already said that in this thread.

By the "letter of the law" (section 9(4)(a) of the SEC act and existing case law), stock manipulation involves promulgating outright falsehoods. Case law shows us that exemplary falsehoods have to be categorically untrue; a biased presentation of something that is true does not pass the bar. Being that there is a vulnerability here, the material we have to go on does not paint a favorable outlook on the researchers being indicted. Activist investors routinely present facts to the media with a clear agenda, but the SEC virtually never prosecutes them if there is an inarguable, material kernel of truth to their allegations. There's a vulnerability here. Reasonable people can disagree on the severity of the vulnerability and how it should have been disclosed. But it's not fraud.

> how does acting completely unethically yet entirely within the law for malicious purposes fit into your framework?

Your question has a presupposition; if the security researchers traded on their knowledge of this vulnerability, I find that to be neither unethical nor illegal stock manipulation.

> Your question has a presupposition

that it is specifically tied to this case.

I think some here believe that the weapon here is financial; to trade the stock.

I'm sure they believe that, but to be blunt, that changes the definition of "black hat" from "compromising people with security vulnerabilities" to "doing things I personally find unsavory when publicly disclosing security vulnerabilities."

If people want to bend over backwards to make an argument about the abstract way in which people are harmed by small disclosure windows, activist investing or information asymmetry in the market, they're free to do so. But none of those things qualifies as black hat behavior. Definitions require precision to be useful, and you throw all precision out the window if you decide to lump people with disclosure habits you dislike in with organized criminals stealing identities en masse.

If the term is flexible, why the hard reaction to my flexing of it?

I agree with the sibling commenters here. This is a bad faith, financially-motivated disclosure with insufficient time given to AMD to react

> If the term is flexible, why the hard reaction to my flexing of it?

The terminology is not flexible, it has a well established meaning. If your bar for a black hat includes legitimate security researchers disclosing vulnerabilities in a way you don't like, you've just expanded the group of people we can call "black hats" almost arbitrarily. You're putting security researchers you have a normative disagreement with into the same group of people who commit actual fraud, steal identities and sell your credit card data.

I found this on /r/AMD haha: https://i.imgur.com/OkWlIxA.jpg

"Although we have a good faith belief in our analysis and believe it to be objective and unbiased, you are advised that we may have, either directly or indirectly, an economic interest in the performance of the securities of the companies whose products are the subject of our reports." from the disclaimer

Why does it say this on the disclaimer:

"...we may have, either directly or indirectly, an economic interest in the performance of the securities of the companies whose products are the subject of our reports."

Are they shorting AMD? https://amdflaws.com/disclaimer.html

It's quite unsettling that Linus thinks so little of security in general, given that he maintains a kernel and is responsible for accepting its security modules, which are next to unusable because of their complexity. Could his general disbelief lead to a (kind of) dismissive attitude in this respect? Keep in mind he's the one who would never properly disclose a security fix - instead of saying which problem is fixed, the general approach is to just publish a new kernel minor release and say "some security bugs are fixed, go figure".

People who work in vulnerability research generally just point and laugh at him. His opinion on this doesn't matter.

And the people who focus on real security point and laugh at the so-called "vulnerability research engineers", and agree with Linus's point.

I'll bite. What's "real security"?

No, that isn’t a thing

24 hours means they don't deserve to be called security researchers. They're exploit creators. Given the material effect this would have on AMD's stock, one might also reasonably speculate about their financial interests.

One difference between security researchers and "exploit creators", which is a term I think you just made up, is that exploit creators presumably release exploits.

Don't tell HD Moore or the Metasploit team about this, though. They may cry themselves to sleep tonight.

Creation and release are two different things. They have created the exploits, or else AMD wouldn't be taking them seriously. They have also contributed more to the re-creation of those exploits by others than they have to security. So you can quibble over whether others use the exact jargon that you would have, but that doesn't change the underlying reality.

Every security researcher creates exploits, so I'm not really sure what the distinction you're trying to make is.

This. Especially with the disclaimer that others have noted:

> "we may have, either directly or indirectly, an economic interest in the performance of the securities of the companies whose products are the subject of our reports"

You don't have to speculate. They admit having financial interests in the actual text of their report.

Vulnerability and Exploit are different.

Can you demonstrate a vulnerability without producing an exploit? You have to provide a PoC to demonstrate it to others, at least, no?

Two sides of the same coin

You can release the concept and description of a vulnerability without releasing an operational exploit.

If the vulnerabilities were real, I'd have no problem with a company using them to promote themselves, trade and talk their book, etc. The issue here is that the vulnerabilities are very overhyped (some are fundamental things like "if you reflash your BIOS with evil, you're screwed", some just make local root access more persistent, etc.).

The problem with something like TRO LLC is that markets don't move on security info.
