Of course I don't know whether or not the NSA knew about Heartbleed. But nothing that would even remotely qualify as evidence has ever been presented.
Everything they said about the BMC was plausible. It was bizarre hearing claims that such a scheme would literally break the laws of physics, when by accident (read: shitty, undebugged HDL) I've caused exactly what they were describing.
I also swear I had read articles months/years before that companies like Apple literally photographed motherboards before shipping and compared them after arrival to look for hardware tampering in transit. That, to me, shows they are not only aware of issues like that, but taking meaningful steps to detect them.
Edit: Here's an article from 2016 about it https://www.businessinsider.com/apple-worried-about-spy-tech...
Apple cares about supply chain integrity, but it’s not enough to stop this kind of threat.
Not just plausible; something not altogether dissimilar was documented to have been happening by the Snowden documents back in 2014: https://www.engadget.com/2014/05/16/nsa-bugged-cisco-routers...
I'm fine with expecting Bloomberg to do a better job covering this activity than they have. And, without further evidence that it's actually happening, I can't quite take the step into believing that it's probably happening, because that way lies tin foil hats.
But for all the security researchers that straight up claim that what Bloomberg reported was impossible, I wonder what their opinion would have been about reports that the NSA was bugging routing and server hardware in transit before 2014?
I wonder why we're so collectively afraid of being labeled 'conspiracy theorists'. What is so wrong with supposing that bad things are being done intentionally?
It's a pernicious bug in certain kinds of psychology that makes it quite hard for someone to tell the difference between nightmarish fantasy and reality. I don't want it.
When reality has repeatedly put nightmarish fantasy to shame - mostly for lack of imagination on fantasy's part - it's not unreasonable to question the line between optimism-laced skepticism and naivety.
People used to have dreams they aspired to. Now we have nightmares we want to see happen. I don't want it either, but apparently society at large does.
I think it was Wired that broke the story about AT&T's secret fiber-splitting rooms several years before they were later confirmed by Snowden's leaks. Given the entities and sheer amount of resources in play (or available to be used for that sort of thing), it's not nearly as tinfoil-hattish as, say, HAARP.
A subset of the perceived issues with the reporting:
- How do the exploited servers phone home to China, when they were not connected to the open Internet? Not impossible, but it's asking for a lot of trust without more information. 
- One of the only named sources, Joe FitzPatrick, said that the details in the Big Hack article are identical to an example he constructed for the journalists to show that type of attack is plausible. The entire podcast is a great listen, but here is a direct quote: "In September when he asked me like, 'Okay, hey, we think it looks like a signal amplifier or a coupler. What’s a coupler? What does it look like?' […] I sent him a link to Mouser, a catalog where you can buy a 0.006 x 0.003 inch coupler. Turns out that’s the exact coupler in all the images in the story."
- An accusation that the journalists who authored the Big Hack had a previous story that made a big claim, backed by many anonymous sources, whose veracity was nonetheless seriously doubted by people in the know.
- Bloomberg sent another reporter, completely separate from the Big Hack team, to retrace their steps and discreetly talk to sources / involved parties to figure out the truth.
I do agree there was a lot of "but SPI requires 6 wires and the slave can only respond when talked to" (treating SPI with the assumptions of a design engineer rather than an attacker), but that was ultimately just noise.
Same problem. EVERYTHING that Dragos Ruiu claimed is plausible, and it could be a great cyberpunk plot written by Neal Stephenson. But there is ZERO evidence that the malware actually exists.
And finding an actual incident in real life is much more important than theoretical possibilities. For example, almost everyone has known since the 1980s that it's very possible for semiconductor vendors to include a silicon-level backdoor, but finding an actual Intel/AMD chip with such a backdoor (not the ME, something like secret instructions) is another matter.
The impact of the BMC affair, had it been true, would have been real evidence and a real demonstration that such an attack has happened and been used in the wild, rather than yet another showing that the attack is possible (which we all know). Unfortunately, bad journalism at work.
P.S: I'm not saying that the ME subsystem, or buggy speculation (pun not intended) isn't a threat, just to make a point.
> I'd be pretty concerned about ME bugs and backdoors disguised as ME bugs.
Same consequences. I'd say they're effectively the same thing.
I guess this says everything about corporations caring about security.
I was excited to read the news story, and it was a huge disappointment.
And yeah, there was a graphic that wasn't entirely accurate; welcome to print journalism.
But you're right, it's been a year now and no further evidence has surfaced, which seems odd.
To be clear: I'm not saying it didn't happen, just that I'm skeptical about the validity and details of their story unless they have something to back it up with.
And the story was that the targeted boards were either destroyed or handed off to the government; are you asking for someone to have risked probable jail time by holding onto one of them?
In national security matters like this it's okay to be skeptical about reporting and sources, because journalists have gotten it wrong before, probably because of how difficult it is to investigate without endangering the sources.
You’re the first person to even propose such a thing.
IIRC it was also heavily focused on Supermicro, without any distinction as to whether Supermicro was specifically targeted or just happened to supply the boards that were bugged and caught.
E.g. showing a chip, or ideally the whole motherboard. Does it actually rewrite instructions going by on MISO, or was something else more practical? Parasitic energy harvesting? Inquiring minds want to know!!
No, it's not acceptable if it's a highly controversial claim in an industry or topic that normally comes with proof of exploitation. They should've gotten it, even if that meant an independent party most would trust vetting it without giving away details that would compromise an investigation. They could get money and/or advertising for doing the review. Otherwise, present it like it's information coming from anonymous, unvetted sources who could be full of shit.
Remember that anonymous doesn't mean unvetted.
And that leak involved literally millions of lives, so it's not as if an intelligence arm of a nation-state doing its job in peacetime is so much more serious that the standards should be higher.
Anonymous certainly doesn't mean unvetted. We should have something come out of the stories if something big is going on, though. If we don't, we have no reason to believe them if the source has other screwups on their record.
I recall it being you, but it must not have been.
And it really doesn’t take much effort to sneak even a Bash Bunny into an internal USB header - especially in the last mile of the supply chain.
Get a temp job as a UPS delivery driver in an area that services the datacenter of your target; whenever you deliver a server box, open it up, add your implant, and re-seal it all in the privacy of the back of your delivery truck, and that’s it.
Are projects like SELinux, SE for Android, or the STM/PE serious enough?
"Retrospective: 26 Years of Flexible MAC": https://www.youtube.com/watch?v=AKWFbxbsU3o
Of course, they've more than squandered that by now, but it's not like they always completely ignored the IAD.
Edit: Like, stuff like cryptanalysis of SM4 is for sure on the table. I can even see their neat Diffie-Hellman hack that costs $100m per nonce. But a trivially remotely exploitable memory safety bug in software that runs large sections of the military? Like, come on.
And as such, the NSA (along with the CIA and perhaps, looking forward, the ONI, MIC, etc) are subject to deprecation.
In order for peace to come to earth in the information age, we must mature beyond a perceived need to have state agencies keeping secrets on the public dime and fomenting reasonable paranoia among the populace.
The point is to gain differential advantage. When you're the rich guy you don't want everyone's doors to be unlockable. When you're the poor guy you do. The USA is the rich guy.
Would that be the smoking gun?
It seems of course that SIGINT is what's "popular" in news.
I work on a web platform team, and I've seen many vulnerability reports over the years (well over 100). I've never seen a report from the NSA or US government. Actually, the only government I've seen reports from is the UK's, so credit to them for actually doing something to keep people secure. But most reports I see are from Project Zero or Chinese companies.
Either the US government doesn't care at all about browser security or they are keeping vulnerabilities for themselves.
I ask because I considered code breaking and code making complementary, in the sense that they debug one another somewhat.
My guess is that this chance to debug one another motivates their coresidence in a single agency.
This is all speculation:
I've noticed a weird amount of ex-CIA find their way to that publication. I sometimes wonder if the China story was some kind of plant. So then the question becomes, do we think this is truthful propaganda or just propaganda?
Fuzzing was capable of finding Heartbleed, and it's advanced massively (and been set up at scale to continuously test open source projects) since then.
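To illustrate why fuzzing finds this class of bug (a toy Python model I made up, not OpenSSL's actual code): a parser that trusts a declared length field leaks adjacent memory, and even a dumb random fuzzer trips over it almost immediately.

```python
import random
import struct

SECRET = b"-----PRIVATE KEY-----"  # stands in for adjacent heap memory


def buggy_heartbeat(packet: bytes) -> bytes:
    """Toy heartbeat responder: trusts the declared payload length."""
    (declared_len,) = struct.unpack(">H", packet[:2])
    payload = packet[2:]
    # Simulate heap memory that happens to sit right after the payload.
    memory = payload + SECRET
    # BUG: no check that declared_len <= len(payload)
    return memory[:declared_len]


def fuzz(rounds: int = 1000, seed: int = 0) -> bool:
    """Return True if any random input makes the responder leak secret bytes."""
    rng = random.Random(seed)
    for _ in range(rounds):
        payload = bytes(rng.randrange(256) for _ in range(rng.randrange(32)))
        declared = rng.randrange(65536)
        response = buggy_heartbeat(struct.pack(">H", declared) + payload)
        if SECRET[:4] in response:  # got back data we never sent
            return True
    return False
```

Because the declared length is sampled from the full 16-bit range while real payloads are short, nearly every random input over-reads, so `fuzz()` reports a leak within the first few iterations.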
Sure, Rust, Ada and such don't remove all classes of bugs, but they can reduce the attack surface considerably, giving you more time to focus on the remaining security bugs.
And maybe people will invent software solutions that reduce the attack surface even more.
Assembly is a fast car with no safety features. C is a sportscar with a seatbelt. Rust is a sportscar with a seatbelt, airbags, ABS, ESC and emergency braking.
Of course you can still crash and die, it's just that you're less likely to do so.
Given the ways you can trigger UB, there is no difference between Assembly and C with regard to safety.
It's still not much though.
If you read to the end of the comment, it suggests dependent typing as the solution.
And now DARPA is working on two related projects:
1. SafeDocs - a document format made safe using LangSec.
2. Tools to help developers easily verify protocols using LangSec parsers.
Would be interesting to see.
But still, TCP/IP doesn't fit LangSec, if I remember correctly?
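For what it's worth, the core LangSec idea is simple enough to sketch (toy Python, a made-up record format of my own, nothing to do with the actual SafeDocs work): fully recognize the input against the grammar, and reject it, before any processing logic ever touches it.

```python
import struct


def recognize(record: bytes):
    """Recognize a toy record: [1-byte type][2-byte length][payload].

    LangSec style: the whole input must match the grammar exactly, or it
    is rejected before processing.  Returns (type, payload) or None.
    """
    if len(record) < 3:
        return None                   # too short to hold even the header
    rec_type = record[0]
    (length,) = struct.unpack(">H", record[1:3])
    if len(record) != 3 + length:     # no short reads, no trailing garbage
        return None
    return rec_type, record[3:]


def handle(record: bytes) -> bytes:
    parsed = recognize(record)        # recognition happens first, in full
    if parsed is None:
        return b""                    # malformed input never reaches the logic
    rec_type, payload = parsed
    return payload                    # echo only bytes actually received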
Oh and don’t forget James Clapper, who TOLD CONGRESS that they weren’t collecting Americans’ phone records, or at least “not wittingly”.
Don’t mind if I tinfoil hat a little bit but I wouldn’t be surprised if “Intel” put that in there just for them.
Seems like it would still be illegal.
The NSA is no more guilty than a fox in the henhouse is guilty of being a fox.
And even if you're using open source, I'm sure the NSA has written tools that can scan source code to find vulnerabilities, and maybe generate exploits if they sprinkled some ML on it.
To be honest I'd rather have a government body hold a monopoly on security than witness cybersecurity chaos, which would quickly destroy the internet. The problem is that only the US does it well.
The main difference from other countries is that we all know about the NSA thanks to what happened with people like Snowden; otherwise it would be no different from other countries -> pure speculation. And why would the US do it better than countries like China, Russia or France?
The US just has much more expertise and many more engineers, which is essential if you want the NSA to recruit well and be the best at what it does. It has many aspects; I guess intellectual and technical capital are the important notions.
Even if other countries can compete with the US on cybersecurity, the US not only holds most of the data but also writes most of the software and designs everything around computers, so it's trivial for them to turn those products against the other countries who buy them.
Except Linux, I really don't see any computer product that doesn't have critical parts or systems made in the US. And as I said, I'm certain the NSA can exploit open source very easily, since it's a problem with solutions: Torvalds said "given enough eyeballs, all bugs are shallow". That is true, but if the NSA is supplying eyeballs to find vulnerabilities and use them to their advantage, Linux will be an asset for the US.
That's a pretty extraordinary claim that requires some evidence. I don't think that's true at all.
Is there any other similar TLS feature?
Even if this is not the only one, wouldn't you audit the shit out of that?
tl;dr: the first thing one does in this field is to sanitize the input.
I would argue that even if you could pin an accusation of negligence on the developer (I've not seen any evidence that could substantiate this accusation), it doesn't rest only with that one developer. The project itself lacked redundant checks. The downstream applications that import OpenSSL similarly failed to audit it.
I think in the whole scheme of things, the Open Source movement had a lot of momentum by the time that code was written, but the corporations that relied on the benefits of open source largely didn't contribute to paying to maintain highly secure coding practices. Heartbleed was one of the incidents that made the internet infrastructure/platform companies (among others) start paying for humans, tools, and reviews to help make these common libraries more secure. Google's Project Zero was started in July 2014, soon after Heartbleed was announced.
This case was a few guys writing software in their free time - 100% likely it was an honest mistake. See https://www.buzzfeed.com/chrisstokelwalker/the-internet-is-b...
The thing is, they add vulnerabilities in a number of ways. They can do it directly with code. They can do it indirectly with standards hard to code correctly w/out vulnerabilities or side channels. There's lots of options. Whatever they do will usually look like a helpful contribution or useful requirement that went wrong in a way that leads to an attack. The better ones are those that look like common or inevitable errors. That's because obvious backdoors make folks run away from a project or supplier maybe forever on top of question who put it there. So, it's usually these flaws that look like obvious errors that still get the job done with everyone around defending the person that put them there.
And I'm not saying it was an NSA job. I have no idea. They've been doing too good a job on most things for me to know. It could've been an accident. Even the probabilities support it being an accident, just as they did all the times it turned out to be subversion. With a $200+ million a year budget for backdoors/hacking, you can bet there were a lot of accidents that, in the non-TS version, had nothing to do with the NSA. ;)
Edit: There's lots of questionable things in this article. My favorite is this:
"And this group are the best of the best of the best."
The OpenBSD team doing LibreSSL had all kinds of summaries, live updates, and even presentations of what they found. It was about as far from that quote as you could imagine. Although my memory sucks, I think at one point they said there was even code that checked to see if endianness changed while it was operating. They at least had that covered. There were so many oddities about that codebase.
I can smell what is known as code smell. So, let's develop a new feature in the most widely used security library. The very first thing that must be done is to sanitize network input. This is the first thing I would expect from a seasoned developer. The lack of this check is suspicious. It could be an honest mistake, of course - we all make mistakes, and I am sure I've made my share of idiotic changes. But this isn't something I would expect from OpenSSL. I agree with @nickpsecurity: "many oddities".
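For reference, here's a toy Python model of the check that was missing (the real code is C in OpenSSL; the layout is simplified and the names are mine): before echoing anything, verify that the declared payload length actually fits inside the record you received.

```python
import struct

PADDING = 16  # TLS heartbeat records carry at least 16 bytes of padding


def heartbeat_response(record: bytes):
    """Toy model of a *patched* heartbeat handler.

    Layout (simplified): [1-byte type][2-byte payload length][payload][padding].
    Returns the echoed payload, or None for malformed input (silently
    discarded, which is what the actual fix does).
    """
    if len(record) < 1 + 2 + PADDING:
        return None  # too short to hold even an empty payload
    (payload_len,) = struct.unpack(">H", record[1:3])  # attacker-controlled
    # The check Heartbleed lacked: declared length must fit in the record.
    if 1 + 2 + payload_len + PADDING > len(record):
        return None
    return record[3:3 + payload_len]
```

A well-formed record echoes back cleanly; a record declaring a 16 KB payload while carrying 4 bytes is dropped instead of leaking whatever sits past the end of the buffer.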
Commit that introduced the vulnerability:
Not pretty to read from a security perspective...
But, on the other hand, quite eye opening, if you want to have your eyes opened...