NSA Said to Have Used Heartbleed Bug for at Least Two Years (2014) (bloomberg.com)
285 points by ColanR | 133 comments



This was never confirmed with any evidence; Bloomberg only cites anonymous sources. Given Bloomberg's bad track record on infosec stories, I have my doubts.

Of course I don't know whether or not the NSA knew about Heartbleed. But nothing that would even remotely qualify as evidence has ever been presented.


As someone who has designed shipping HDL, including more SPI masters than I can count, and who now works as a security researcher (among a couple of other hats I wear at my current job), I really think Bloomberg got dragged through the mud.

Everything they said about the BMC was plausible. It was bizarre hearing how such a scheme would literally break the laws of physics, when by accident (read: shitty, undebugged HDL) I've caused exactly what they were claiming.


Like others have said, the pushback came from Bloomberg saying it was happening, not from it being implausible, as well as from their refusal to pull the story or clarify the details.

I also swear I had read articles, months or years before, about how companies like Apple literally photographed motherboards before shipping and compared the photos on arrival to look for hardware tampering in transit. That, to me, shows they are not only aware of issues like this but taking meaningful steps to detect them.

Edit: Here's an article from 2016 about it https://www.businessinsider.com/apple-worried-about-spy-tech...


The hack supposedly happened in 2015. Apple being super concerned about server motherboard supply chain management in 2016 for unspecified reasons lines up with the timeline very well.


Federal buyers use companies like Harris Corp to sample and analyze devices and components for tampering or counterfeiting. Even then, bad stuff gets through.

Apple cares about supply chain integrity, but it’s not enough to stop this kind of threat.


> Everything they said about the BMC was plausible.

Not just plausible; something not altogether dissimilar was documented as actually happening in the Snowden documents back in 2014: https://www.engadget.com/2014/05/16/nsa-bugged-cisco-routers...

I'm good with Bloomberg being expected to do a better job covering this activity than they have. And, without further evidence that it's actually happening, I can't quite take the step into believing that it's probably happening, because that way lies tin foil hats.

But for all the security researchers that straight up claim that what Bloomberg reported was impossible, I wonder what their opinion would have been about reports that the NSA was bugging routing and server hardware in transit before 2014?


> because that way lies tin foil hats

I wonder why we're so collectively afraid of being labeled 'conspiracy theorists'. What is so wrong with supposing that bad things are being done intentionally?


It's not a matter of supposing that certain things may be happening, or even that they probably are. It's a matter of believing with certainty that they are, without concrete evidence and sometimes even when there's evidence to the contrary.

It's a pernicious bug in certain kinds of psychology that makes it quite hard for someone to tell the difference between nightmarish fantasy and reality. I don't want it.


> It's a pernicious bug in certain kinds of psychology that makes it quite hard for someone to tell the difference between nightmarish fantasy and reality. I don't want it.

When reality has repeatedly put nightmarish fantasy to shame - mostly for lack of imagination on fantasy's part - it's not unreasonable to question the line between optimism-laced skepticism and naivety.

People used to have dreams they aspired to. Now we have nightmares we want to see happen. I don't want it either, but apparently society at large does.


Well said. Of interest to me is why these beliefs? Without evidence, one is capable of believing anything to be true. So why believe conspiracy at all? What psychological and/or aesthetic need are these specific beliefs satisfying?


Because there is already endless concrete evidence that the elites and establishment are out to fleece the common person, and when someone has their bubble burst on this fact, they start looking everywhere for where they might be screwed.



It's also the whole idea of attack models: you don't wait for evidence to construct an attack model.


> I can't quite take the step into believing that it's probably happening, because that way lies tin foil hats.

I think it was Wired that broke the story about AT&T's secret fiber-splitting rooms several years before Snowden's leaks confirmed them. Given the entities and the sheer amount of resources in play (or available to be used for that sort of thing), it's not nearly as tinfoil-hattish as, say, HAARP.


Bloomberg was rightfully dragged through the mud (IMHO), and like the parent I am immediately distrustful of any technical stories they put out. The issue was not that the BMC hack was implausible, but rather Bloomberg's refusal to supply solid evidence backing up their claims in the face of strong denials and perceived issues with the reporting.

A subset of the perceived issues with the reporting:

- How do the exploited servers phone home to China, when they were not connected to the open Internet? Not impossible, but it's asking for a lot of trust without more information. [0]

- One of the only named sources, Joe FitzPatrick, said the details in the Big Hack article are identical to an example he constructed for the journalists to show that this type of attack is plausible. The entire podcast is a great listen, but here is a direct quote: "In September when he asked me like, 'Okay, hey, we think it looks like a signal amplifier or a coupler. What’s a coupler? What does it look like?' […] I sent him a link to Mouser, a catalog where you can buy a 0.006 x 0.003 inch coupler. Turns out that’s the exact coupler in all the images in the story." [1]

- An accusation that the journalists who authored the Big Hack had previously run a story making a big claim, backed by many anonymous sources, where in the end people in the know had extreme doubts about its veracity. [2]

- Bloomberg sent another reporter, completely separate from the Big Hack team, back over the same ground to discreetly talk to sources and involved parties and figure out the truth. [3]

Sources:

[0] https://daringfireball.net/2018/10/bloomberg_the_big_hack

[1] https://risky.biz/RB517_feature/

[2] https://threadreaderapp.com/thread/1049617855396933632.html

[3] https://www.washingtonpost.com/blogs/erik-wemple/wp/2018/11/...


The problem with what Bloomberg reported was not that it was implausible, but rather that it was unsubstantiated as to whether the attack actually occurred. If Bloomberg had narrated the article as "this could happen", attempting to explain a possible attack vector, that would have been fantastic.

I do agree there was a lot of "but SPI requires 6 wires and the slave can only respond when talked to" (treating SPI with the assumptions of a design engineer rather than an attacker), but that was ultimately just noise.


Agree.

Remember BadBIOS?

* https://en.wikipedia.org/wiki/BadBIOS

Same problem. EVERYTHING that Dragos Ruiu claimed is plausible, and it could be a great cyberpunk plot written by Neal Stephenson. But there is ZERO evidence that the malware actually exists.

And finding an actual incident in real life is much more important than theoretical possibilities. For example, almost everyone knows it has been possible since the 1980s for semiconductor vendors to include a silicon-level backdoor, but finding an actual Intel/AMD chip with such a backdoor (not the ME; something like secret instructions) is another matter.


The ME has a debug mode that it might be possible to enable with a signal sent through the 3.5mm jack on some laptops[1]. I'd be pretty concerned about ME bugs and backdoors disguised as ME bugs.

1. https://youtu.be/xUJQps2-VWk?t=301


I meant that finding a backdoor in its full form on a production system would be a much more significant find, and its impact and newsworthiness would be greater than any hypothetical or baseless speculation, such as Bloomberg's BMC affair.

The impact of the BMC affair, if true, would be real evidence and a real demonstration that such an attack has happened and been used in the wild, rather than a showing that the attack is possible (which we all know). Unfortunately, bad journalism was at work.

P.S.: I'm not saying that the ME subsystem, or buggy speculation (pun not intended), isn't a threat; I'm just making a point.

> I'd be pretty concerned about ME bugs and backdoors disguised as ME bugs.

Same consequences. I'd say they're effectively the same thing.


Zuckerberg's famous MacBook with tape over the webcam and microphone jack has a lot more context now.

I guess this says everything about corporations caring about security.


There's a difference between one guy chasing ghosts, and several sources anonymously saying "we found this and had to mitigate it".


Well, a year later, we still haven't seen the backdoor chip in question taken to a lab or shown at DEFCON... Even the photograph was fake, just a stock photo...

I was excited to read the news story, and it was a huge disappointment.


We also haven't seen the devices in the NSA's TAO catalog taken to a lab or showing up at DEFCON. That's half the point of targeted hardware attacks.

And yeah, it was a graphic that wasn't entirely accurate; welcome to print journalism.


To be fair to Bloomberg, some of the companies involved were also complicit in the NSA's PRISM project, and in the initial reporting of that they all denied giving the government a backdoor.

But you're right, it's been a year now and no further evidence has surfaced which seems odd.


PRISM is generally thought (by my understanding) to be snooping on lines between data centers, without the complicity of the companies targeted.


I thought PRISM was an endpoint where companies upload data in response to NSLs. Whether that data is being pulled by a human or by a computer is irrelevant, that data is getting pulled either way.


The original disclosure was about a targeted hack. What further evidence would you expect to appear?


Regarding the "spy chip" story: Other sources. Or other news agencies confirming the sources used. Or any evidence of the chips being planted on any hardware at all.

To be clear: I'm not saying it didn't happen, just that I'm skeptical about the validity and details of their story unless they have something to back it up with.


They had multiple sources.

And the story was that the targeted boards were either destroyed or handed off to the government, are you asking for someone to have probably risked jail time by holding onto one of them?


Just because it's hard to provide evidence in a way that ensures the safety of the sources doesn't mean the story earns any extra trustworthiness.

In national security matters like this it's okay to be skeptical about reporting and sources, because journalists have gotten it wrong before, probably because of how difficult it is to investigate without endangering the sources.


> are you asking for someone to have probably risked jail time by holding onto one of them?

You’re the first person to even propose such a thing


No, I'm not. They handed over the boards as part of an investigation. Withholding evidence in an investigation involving national security, along with all the false statements you'd need to make to pull that off, is a great way to not see your family for a few years. I'm not the first to suggest this.


I mean, relying on multiple anonymous sources saying "this happened and this is how it happened", along with third parties validating the plausibility, is a commonly accepted standard for journalism.


I don't know if I'd thought about it that way. There is so much ambiguity and embellishment in journalism in general that I don't view its output as having any sort of authority. By the standards of security research, they didn't detail any evidence demonstrating their claim that it was actually being exploited [0]. It seems that difference in perspective is how Bloomberg thought their story was reasonable.

IIRC it was also heavily focused on Supermicro, without any distinction as to whether Supermicro was specifically targeted or just happened to supply the boards that were bugged and caught.

[0] e.g. showing a chip, or ideally the whole motherboard system. Does it actually rewrite instructions going by on MISO, or was something else more practical? Parasitic energy harvesting? Inquiring minds want to know!


Oh no, it isn't. Look, I get that you've seen enough to either know the shit is happening or believe it is based on what you've seen happen. I'm in the same boat. I'm just saying you're hurting your already-good credibility by saying things like that.

No, it's not acceptable for a highly controversial claim in an industry or on a topic that normally comes with proof of exploitation. They should have gotten that proof, even if it meant an independent party most would trust vetting it without giving away details that would compromise an investigation. They could get money and/or publicity for doing the review. Otherwise, present it as information coming from anonymous, unvetted sources who could be full of shit.


Literally some of the most influential journalism of all time has been sourced from anonymous informants who weren't vetted by third parties. We didn't find out who Deep Throat was until over thirty years later, after he was in a nursing home with dementia.

Remember that anonymous doesn't mean unvetted.

And that leak involved literally millions of lives, so it's not like an intelligence arm of a nation-state doing its job in peacetime is so much more serious that the standards should be higher.


Didn't Deep Throat's testimony activate a government response indicating it was probably true? Or did everyone involved act like they were full of it? Honestly, I can't remember what I read and I wasn't from that time either.

Anonymous certainly doesn't mean unvetted. We should have something come out of the stories if something big is going on, though. If we don't, we have no reason to believe them if the source has other screwups on their record.


I don't know which is more likely: China leaning on Apple et al. to hush up their spying attempts, or the USG feeding misinformation to Bloomberg to gin up the trade war. But it's probably one or the other, or both.


(The famous) Dan Farmer was warning about problems with IPMI/BMC around the same time:

http://www.fish2.com/ipmi/


[flagged]


This didn't happen, I have never DM'd you.


Not sure if I should downvote or upvote parent now :p


I love how well connected HN is to the tech world. Did you see this by chance, or do you have some kind of alert system set up?


Well, a human alert system: a friend saw my name and sent me the link :)


I'm sorry for initially claiming that you DMed me a harassing ancient aliens meme in response to my comments regarding the Bloomberg hardware implant story.

I recall it being you, but it must not have been.


> It's odd how dismissive some supposedly-serious security researchers are of hardware implant capabilities.

And it really doesn't take much effort to sneak even a Bash Bunny into an internal USB header, especially in the last mile of the supply chain.

Get a temp job as a UPS delivery driver in an area that services your target's datacenter. Whenever you deliver a server box, open it up, add your implant, and re-seal it all in the privacy of the back of your delivery truck, and that's it.


Can you show us the messages? He’s saying that never happened.


Please avoid personal attacks and/or libel.


While I am skeptical of Bloomberg, I'm even more skeptical of the NSA, as they've burned all their goodwill in my eyes at this point. So I'm in the "probably true, and if not then probably something nearly identical happened with a different serious vulnerability" camp.


What "goodwill" would be involved here? NSA is chartered to exploit things like Heartbleed.


They're also chartered to provide information assurance.


I'm not sure anyone really takes the IAD mission seriously. The actual purpose of the organization is to do offensive SIGINT.


They don't try to take information assurance seriously?

Are projects like SELinux[1], SE for Android[2], or the STM/PE[3] serious enough?

[1]: "Retrospective: 26 Years of Flexible MAC" https://www.youtube.com/watch?v=AKWFbxbsU3o

[2]: https://selinuxproject.org/page/SEforAndroid

[3]: https://www.cyberscoop.com/nsa-firmware-open-source-coreboot...


Oh, they requisition budget for the IAD mission, and they use it on IAD things. In reality, the most important thing NSA does is get budget allocated to itself! But does anyone believe that in a conflict between IAD and CNO/SIGINT, IAD has ever won?


I think they got some retroactive goodwill from the DES thing when it was discovered that they legitimately made it stronger.

Of course, they've more than squandered that by now, but it's not like they always completely ignored the IAD.


I mean, I don't really have any sympathy for them writing off half their mission, despite the impunity with which they do so.


Right, which is why any denial they make about this sort of thing is meaningless.


One of those goals benefits the people with power who are above the NSA. The other provides a benefit to the public at large that few will notice. Which goal do you think is likely to be top priority?


Well, in the case of Heartbleed, where first the NSA found it and then an independent researcher found it, and where the DoD uses Linux and OpenSSL all over the place, you'd think the information assurance side would be better represented. Who knows how many adversaries were using it as well before it was public (hence the whole point of responsible disclosure).

Edit: Like, stuff like cryptanalysis of SM4 is for sure on the table. I can even see their neat Diffie-Hellman hack that costs $100m per prime. But a trivially remotely exploitable memory-safety bug in software that runs large sections of the military? Like, come on.


To be fair, I bet the NSA knows better than anyone if any given exploit is being used in the wild.


Sure, if indeed the InfoSec arm is just for show and not the thrust of the organization, then it was chartered in such a way as to be incapable of cultivating goodwill, and incapable of existing in a just and free society.

And as such, the NSA (along with the CIA and perhaps, looking forward, the ONI, MIC, etc.) is subject to deprecation.

In order for peace to come to earth in the information age, we must mature beyond a perceived need to have state agencies keeping secrets on the public dime and fomenting reasonable paranoia among the populace.


Well, yes, but the problem is that (unlike Dual EC-DRBG) other people can also exploit these things while they remain open. For instance: would the USA be better off if Project Zero shipped all their findings to the NSA and both kept quiet, or would the USA be better off if these things got fixed?

The point is to gain differential advantage. When you're the rich guy you don't want everyone's doors to be unlockable. When you're the poor guy you do. The USA is the rich guy.


The NSA has an interesting exhibit that talks about Heartbleed at their museum in Maryland (worth a visit if you're in the area—they have some Enigma machines which are a lot of fun). Of course I don't think the exhibit was there before the bug was publicly known.


> Of course I don't think the exhibit was there before the bug was publicly known.

Would that be the smoking gun?


I haven't been to the Cryptologic museum but I did get to see one of these Enigmas as it happened to be on loan to the American Computer Museum in Bozeman, MT when I visited there. If you like museums, don't miss this one, it was really awesome. Oh, and free.


Where in Maryland is this? Close to Ft. Meade?


Directly next to Ft Meade


I consider the likelihood that the NSA did a security audit of OpenSSL very high. And given the horror stories we heard from people trying to fix it, Heartbleed is probably not the only thing they found.


On the balance of probabilities, it is probably true. I would expect the NSA to make use of any vulnerabilities they find, because their job is to hack others, not to keep us safe. Unfortunately.


The NSA is responsible for so-called SIGINT and SIGSEC: acronyms for signals intelligence, which is what you're referring to, and signals security, which IS about keeping our communications safe.

It seems, of course, that SIGINT is what's "popular" in the news.


I don't think it's only popular in the news.

I work on a web platform team, and I've seen many vulnerability reports over the years (well over 100). I've never seen a report from the NSA or the US government. Actually, the only government I've seen reports from is the UK's, so credit to them for actually doing something to keep people secure. But most reports I see are from Project Zero or Chinese companies.

Either the US government doesn't care at all about browser security or they are keeping vulnerabilities for themselves.


No, the US government has taken the position that it's always best to have a few tricks up your sleeve when the chips are down. It is most certainly intentional stockpiling of zero-days for strategic advantage.


The French government (CERT-FR and ANSSI) made recommendations about Heartbleed.


I would argue that such a clear conflict between these two priorities should necessitate a bespoke, separate governmental organization for SIGSEC, so that the NSA can freely focus on SIGINT.


What is the conflict in priorities that you mean?

I ask because I considered code breaking and code making complementary, in the sense that they debug one another somewhat.

My guess is that this chance to debug one another motivates their coresidence in a single agency.


The SIGINT arm of the NSA has an incentive to take any exploitable vulnerabilities in existing software and keep them secret, so they can use them against their enemies, rather than disclosing them so they can be fixed.


Disclosing vulnerabilities.


They should definitely disclose vulnerabilities in American-made software; that's SIGSEC. If the software is foreign, then not disclosing would be SIGINT.


If the software is foreign-made and used in the US, Americans are still vulnerable. And open source isn't American/national anyway.


The problem with that is there's a lot of software that's American made that is also used by potential targets of SIGINT.


Well, the real question isn't whether the NSA used it, but whether they knew about it long before everyone else and never reported it.


That's how CNO exploitation works. They generally can't report; their adversaries are recording their own networks, and will retrospectively detect intrusions.


It's their SOP. See: EternalBlue. https://en.m.wikipedia.org/wiki/EternalBlue


> Given Bloomberg's bad track record on infosec stories, I have my doubts.

This is all speculation:

I've noticed a weird number of ex-CIA people finding their way to that publication. I sometimes wonder if the China story was some kind of plant. So then the question becomes: do we think this is truthful propaganda, or just propaganda?


If the CIA has editorial control over Bloomberg, that would be a great way to manipulate the market to fund black ops.


Control is a strong word. From my reading, I think they have journalists willing to listen who don't have the skills (or inclination) to sniff out BS. The article seems not to differentiate between SSL and TLS, for example, mentioning people breaking the former.


Right, hold the most clandestine organizations in the world to the same standards as petty theft. Do we really expect the NSA to be trailing HN in knowledge on zero-day vulnerabilities?


This is 2019 and the security community has yet to deliver a proper solution to prevent the existence of such bugs. The mismatch between programmer intent and code behavior is appalling. Sure, super-smart coders can avoid these bugs, much like super-safe drivers can avoid the shoulder, but rumble strips are there for a reason. The bug would not have arisen if the language had supported dependent typing. See Agda, for example. One day...

https://en.wikipedia.org/wiki/Dependent_type
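
To make the intent/behavior mismatch concrete, here is a minimal sketch of the bug class in C. Illustrative only, with made-up names; this is not the actual OpenSSL code:

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    /* Echo back `claimed_len` bytes of a peer's message. `msg` holds
       `msg_len` bytes actually received from the network. */
    unsigned char *echo(const unsigned char *msg, size_t msg_len,
                        uint16_t claimed_len) {
        unsigned char *reply = malloc(claimed_len);
        if (reply == NULL)
            return NULL;
        /* BUG: trusts the attacker-supplied length. If claimed_len is
           larger than msg_len, this reads past the end of msg and
           leaks adjacent heap memory; that is the Heartbleed pattern. */
        memcpy(reply, msg, claimed_len);
        return reply;
    }

In a dependently typed language like Agda, the payload would be a length-indexed vector, so a claimed length that disagrees with the bytes actually received is a type error at compile time rather than an information leak at run time.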


> security community has yet to deliver a proper solution

What do you want them to do? I think their solution is "use memory-safe languages like Golang or Rust (or even JavaScript/TypeScript) for new projects, not C or C++" and "use extensive fuzzing on legacy C code that hasn't been replaced yet".

Fuzzing was capable of finding Heartbleed, and it's advanced massively (and been set up at scale to continuously test open source projects) since then.
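
For concreteness, a libFuzzer-style harness is just a function like the one below; parse_record is a hypothetical stand-in for whatever code is under test:

    #include <stdint.h>
    #include <stddef.h>

    /* Hypothetical parser under test; imagine the real code here.
       (0x18 happens to be the TLS heartbeat content type.) */
    static int parse_record(const uint8_t *buf, size_t len) {
        return (len > 0 && buf[0] == 0x18) ? 0 : -1;  /* toy check */
    }

    /* The fuzzer calls this millions of times with mutated inputs.
       Built with AddressSanitizer, an out-of-bounds read like
       Heartbleed becomes an immediate, reproducible crash. */
    int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
        parse_record(data, size);
        return 0;
    }

Compile with something like `clang -fsanitize=fuzzer,address` and give it a seed corpus of valid records.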


Rust is pretty useful against memory-safety bugs, and it is prudent for new projects to take advantage of it. But it in no way fixes the headache of security, because (1) almost all existing infrastructure code depends on C, and (2) new classes of bugs will eventually emerge, some partly unfixable by software solutions.


> New classes of bugs will eventually emerge, some partly unfixable by software solutions.

Sure, Rust, Ada and such don't remove all classes of bugs, but they can reduce the attack surface considerably, giving you more time to focus on the remaining security bugs.

And maybe people will invent software solutions that reduce the attack surface even more.

Assembly is a fast car with no safety features. C is a sports car with a seatbelt. Rust is a sports car with a seatbelt, airbags, ABS, ESC, and emergency braking.

Of course you can still crash and die, it's just that you're less likely to do so.


> Assembly is a fast car with no safety features. C is a sports car with a seatbelt.

Given the ways you can trigger UB there is no difference between Assembly and C with regards to safety.


Just type "undefined behavior" here. I'm sure you have the time.


I know what you mean; however, I think C did bring down the number of bugs compared to assembly, just by making code easier to read and higher-level abstractions available.

It's still not much, though.


> What do you want them to do?

If you read to the end of the comment, it suggests dependent typing as the solution.


I'm sure the security community would love for these things to get fixed, but it's a constant fight trying to get developers to learn the correct way to do something. I've had developers tell my manager that having to fix the security issues I report is putting them behind schedule, so they wanted me to only report security issues when they weren't working on anything else. Luckily my manager has my back and told them security is a top priority, even if it means features come out late.


The problem was a shotgun parser. LangSec 101. Use a formal parser.
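
For anyone unfamiliar with the jargon: a shotgun parser scatters validity checks through the code that consumes the input, while LangSec says to recognize the entire input against its grammar before acting on any of it. A rough sketch for a toy type/length/value format (hypothetical; not OpenSSL's code):

    #include <stdint.h>
    #include <stddef.h>

    /* Accept the input only if it is a well-formed sequence of
       TLV records: type (1 byte), length (1 byte), value. */
    int recognize_tlv(const uint8_t *buf, size_t len) {
        size_t i = 0;
        while (i + 2 <= len) {
            uint8_t l = buf[i + 1];          /* claimed value length */
            if (i + 2 + (size_t)l > len)
                return -1;                   /* overruns input: reject */
            i += 2 + (size_t)l;
        }
        return (i == len) ? 0 : -1;          /* partial header: reject */
    }

Only after recognition succeeds should any other code touch the values. Heartbleed was exactly a claimed length that was never checked against what had actually been received.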


One of the creators of LangSec has joined Darpa:

https://www.darpa.mil/staff/dr-sergey-bratus

And now Darpa is working on two related projects:

https://www.darpa.mil/attachments/SafeDocs%20ProposersDay-Fi...

1. SafeDocs - a document format made safe using LangSec.

2. Tools to help developer verify protocols using LangSec parsers, easily.

Would be interesting to see.

But still, TCP/IP doesn't fit LangSec, if I remember correctly?


If languages supported dependent types and everything else that happens in cutting-edge PL research, wouldn't programming become super hard anyway for average programmers who are under pressure to deliver?


Perhaps. But for critical systems where security or reliability is a requirement, it's more important to get it right than to get it done quickly. Trying to rush these things is just asking for trouble.


This is more of a computer science issue, with languages insisting on being Turing complete. I suppose governments could incentivize language designers to produce and support languages that eliminate ever-larger classes of vulnerabilities, by only buying software written in those kinds of languages.


Even with rumble strips before the shoulder, accidents still happen. And they will continue to happen, the same as security vulnerabilities in code.


> The NSA has issued a statement denying the report. In an email to Ars, NSA spokesperson Vanee Vines provided this official statement: “NSA was not aware of the recently identified vulnerability in OpenSSL, the so-called Heartbleed vulnerability, until it was made public in a private-sector cybersecurity report. Reports that say otherwise are wrong.”

https://arstechnica.com/information-technology/2014/04/nsa-u...


Riiight. Snowden even said in Citizenfour that they claim their access is going dark when in reality it just keeps getting better and better.

Oh, and don't forget James Clapper, who TOLD CONGRESS that they weren't collecting Americans' phone records, or at least "not wittingly".

Don't mind if I tinfoil-hat a little bit, but I wouldn't be surprised if "Intel" put that in there just for them.


Oh. Are we still taking NSA's statements at face value post-Snowden?


I'd rather the PR statement be added to the discussion, even if not all readers trust it.


They're at least as valuable as Bloomberg's anonymous sources. You can't choose which sources you trust more just because they agree with your pre-existing beliefs.


What's the legality of using a vulnerability to monitor someone if you have a warrant?

Seems like it would still be illegal.


Secret services are in the business of doing exactly these illegal things on behalf of the state. I think there is pretty much nothing in the daily business of the CIA or the NSA that wouldn't land an ordinary citizen in jail. Whether it is convincing foreign officials to leak information, or bribing them, or plotting a coup, or assassinating enemies of the state, or intercepting communications, etc. This is no different from any other secret service in the world.

The NSA is no more guilty than a fox in the henhouse is guilty of being a fox.


Parallel construction can hide that illegal surveillance ever happened.


The answer to the question probably depends on unfettered access to all of the Executive Orders (and Decisions and Memos) and White House Counsel interpretations of existing law. We mere mortals will likely never know.


You mean like when FBI got warrants to exploit Tor users?


So far the state of the computer industry is pretty simple: if you're using American products, you're under American surveillance. Governments will always seize the monopoly on security; that's how civilization works.

And even if you're using open source, I'm sure the NSA has written tools that can scan source code for vulnerabilities, and maybe generate exploits if they've sprinkled some ML on it.

To be honest, I'd rather have a government body hold the monopoly on security than witness cybersecurity chaos, which would quickly destroy the internet. The problem is that only the US does it well.


> The problem is that only the US does it well.

The main difference from other countries is that we all know about the NSA thanks to people like Snowden; otherwise this would be pure speculation, no different from any other country. And why would the US do it better than countries like China, Russia or France?


Because the US has Silicon Valley, historically it invented the internet and modern computers, and it is home to most of the core tech companies (Intel, AMD, Microsoft, Google).

The US just has much more expertise and many more engineers, which is essential if you want the NSA to recruit well and be the best at what it does. It has many aspects; I guess cerebral and technical capital are the important notions.

Even if other countries can compete with the US on cybersecurity, the US holds most of the data, and it also writes most of the software and designs everything around computers, so it's trivial for them to turn those products against the countries that buy them.

Except for Linux, I really don't see any computer product that doesn't have critical parts or systems made in the US. And as I said, I'm certain the NSA can exploit open source very easily, since it's a problem with known solutions: Torvalds said "given enough eyeballs, all bugs are shallow". That is true, but if the NSA is supplying the eyeballs to find vulnerabilities and use them to its advantage, Linux is an asset for the US.


> The US just has much more expertise and many more engineers

That's a pretty extraordinary claim that requires some evidence. I don't think that's true at all.


If you can't read the article because Bloomberg asks for premium fees, don't miss out: check out the page's HTML ;)


I thought we already knew that they don't report bugs they know about (because they don't care about the greater good).


The weird thing to me is that Heartbleed depended on a TLS feature in which the server receives a request from the client specifying a size and acts on it.

Is there any other similar TLS feature?

Even if this is not the only one, wouldn't you audit the shit out of that?
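
For reference, the heartbeat message is defined in RFC 6520; paraphrasing the wire format from memory as a C-style sketch:

    #include <stdint.h>

    /* Heartbeat message layout per RFC 6520 (paraphrased; a real C
       struct would not match the wire exactly due to padding and
       endianness). The sender states its own payload length and the
       receiver echoes that many payload bytes back. */
    struct heartbeat_message {
        uint8_t  type;            /* heartbeat_request(1) / _response(2) */
        uint16_t payload_length;  /* attacker-controlled in a request */
        /* followed by payload[payload_length],
           then at least 16 bytes of random padding */
    };

The bug was that OpenSSL echoed payload_length bytes without checking the claim against the length of the record actually received.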


I don't fully trust Bloomberg, but here's a funny thing: nobody hit them hard in court. So, logically, it means at least part of their material is true and evidence exists.


Excellent, US government putting my money to work!


To this day I think it was a deliberately placed vulnerability. I don't have any data to back it up, of course.


I sincerely, sincerely doubt it. Just look at the history of the actual bug (it's not hard at all to believe it happened by accident) and at how undermanned OpenSSL was; I'm just surprised it didn't happen sooner.


That's a serious and damning accusation to level against a volunteer contributor to an open source project, and in very bad taste if indeed you have no evidence (or even a reasoned narrative of the hows and whys).


In a sibling comment.

tl;dr: the first thing one does in this field is to sanitize the input.


I would also argue that processes and tools should guard against developer mistakes, negligence, or maliciousness. Unit tests, static analysis, fuzzing, integration tests, security audits, code reviews, the principle of least privilege, etc. all have a part to play, and yet this lapse in validation still managed to make it into production and infect all of the downstream libraries and applications.

I would argue that even if you could pin an accusation of negligence on the developer (I've not seen any evidence that could substantiate this accusation), it doesn't rest only with that one developer. The project itself lacked redundant checks. The downstream applications that import OpenSSL similarly failed to audit it.

I think in the whole scheme of things, the Open Source movement had a lot of momentum by the time that code was written, but the corporations that relied on the benefits of open source largely didn't contribute to paying to maintain highly secure coding practices. Heartbleed was one of the incidents that made the internet infrastructure/platform companies (among others) start paying for humans, tools, and reviews to help make these common libraries more secure. Google's Project Zero was started in July 2014, soon after Heartbleed was announced.


thank you.


Your problem is that absolutely anything can be explained by a conspiracy.

This case was a few guys writing software in their free time - 100% likely it was an honest mistake. See https://www.buzzfeed.com/chrisstokelwalker/the-internet-is-b...


What's funny about what you linked is that a government demand led to the vulnerability, at a point when the owners or main people were thinking of walking away. A conspiracy around that would be more believable than most. Let's ignore that, though.

The thing is, they add vulnerabilities in a number of ways. They can do it directly with code. They can do it indirectly with standards that are hard to implement without vulnerabilities or side channels. There are lots of options. Whatever they do will usually look like a helpful contribution or useful requirement that went wrong in a way that leads to an attack. The better ones are those that look like common or inevitable errors. That's because obvious backdoors make folks run away from a project or supplier, maybe forever, on top of raising questions about who put them there. So, it's usually these flaws that look like ordinary errors that still get the job done, with everyone around defending the person who put them there.

And I'm not saying it was an NSA job. I have no idea; they've been doing too good a job on most things for me to know. It could have been an accident. Probability even supports it being an accident, just like it did all the times it turned out to be subversion. At a $200+ million a year budget for backdoors/hacking, you can bet there were a lot of "accidents" that, in the non-TS version of events, had nothing to do with the NSA. ;)

Edit: There's lots of questionable things in this article. My favorite is this:

"And this group are the best of the best of the best."

The OpenBSD team doing LibreSSL had all kinds of summaries, live updates, and even presentations of what they found. It was about as far from that quote as you could imagine. Although my memory sucks, I think at one point they said there was even code that checked whether the endianness had changed while the program was running. At least they had that covered. There were so many oddities about that codebase.


Exactly. I don't have any evidence to back up my suspicion. If the evidence exists, it exists in a locked safe somewhere at Ft Meade (or other place) and in the brains of a very few people.

I can feel what is known as code smell. So, you're developing a new feature in the most widely used security library. The very first thing that must be done is to sanitize the network input; that is the first thing I would expect of a seasoned developer. The lack of this check is suspicious. It could be an honest mistake, of course; we all make mistakes, and I am sure I've made my share of idiotic changes. But this isn't something I would expect in OpenSSL. I agree with @nickpsecurity: "many oddities".

Commit that introduced the vulnerability:

https://git.openssl.org/gitweb/?a=commit&h=4817504d069b4c508...

Fix:

https://github.com/openssl/openssl/commit/96db9023b881d7cd9f...
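
For anyone not clicking through: paraphrasing that fix commit from memory (not a verbatim quote), the essence is two guards added before the claimed length is trusted. Here p points into the received record, and s->s3->rrec.length is the number of bytes actually received:

    /* Discard if the record cannot even hold type + length + padding. */
    if (1 + 2 + 16 > s->s3->rrec.length)
        return 0;               /* silently discard, per RFC 6520 */
    hbtype = *p++;
    n2s(p, payload);            /* read the claimed 16-bit payload length */
    /* Discard if the claimed payload overruns the received record. */
    if (1 + 2 + payload + 16 > s->s3->rrec.length)
        return 0;               /* silently discard */

The pre-fix code read the claimed length and memcpy'd that many bytes into the reply with no such check, which is exactly the missing input sanitization being described.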


LibreSSL: More Than 30 Days Later https://www.openbsd.org/papers/eurobsdcon2014-libressl.html

Not pretty to read from a security perspective...

But, on the other hand, quite eye opening, if you want to have your eyes opened...


Thanks! I'll add it to my bookmarks for the next time this comes up.


And your problem is that absolutely anything can be explained by malicious intent.


It sounds like you probably haven't worked on a lot of C code. OpenSSL is a giant pile of security vulnerabilities. It is not at all hard to believe that someone added a vulnerability by mistake; it would be more surprising to me if they hadn't.




