Were Intelligence Agencies Using Heartbleed in November 2013? (eff.org)
290 points by things on Apr 10, 2014 | 80 comments

I helped write this post. Note that we're very interested in anyone who has been keeping raw packet logs from before the Heartbleed vuln. was public. If you find 18 03 (01 | 02 | 03) 00 03 01 in them, please let me know or post pcap files. Contact info: https://www.eff.org/about/staff/yan-zhu
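If anyone does have such captures, here is a minimal sketch (Python; the function name is my own invention) of grepping raw bytes for that signature:

```python
import re

# TLS record header of a cleartext heartbeat request:
#   0x18        content type 24 (heartbeat)
#   0x03 0x0?   record-layer version (TLS 1.0, 1.1, or 1.2)
#   0x00 0x03   record length of 3 bytes
#   0x01        heartbeat message type 1 (request)
HEARTBEAT_SIG = re.compile(rb"\x18\x03[\x01-\x03]\x00\x03\x01")

def find_heartbeat_offsets(data: bytes) -> list:
    """Return the byte offset of every heartbeat-request signature in data."""
    return [m.start() for m in HEARTBEAT_SIG.finditer(data)]
```

Scanning a whole pcap file byte-for-byte like this is crude: the pattern can straddle packet boundaries or occur by chance in unrelated payloads, so treat hits as candidates to inspect, not proof.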

Are heartbeats typically visible in the raw traffic, or (after some point) do they wind up inside the secured stream?

(If the latter, this could be an unfortunate case where Perfect Forward Secrecy, when enabled, also helps obscure exploits from later forensic discovery...)

It appears that you might be right, from the RFC:

"However, a HeartbeatRequest message SHOULD NOT be sent during handshakes. If a handshake is initiated while a HeartbeatRequest is still in flight, the sending peer MUST stop the DTLS retransmission timer for it. The receiving peer SHOULD discard the message silently, if it arrives during the handshake. In case of DTLS, HeartbeatRequest messages from older epochs SHOULD be discarded."

But that doesn't make sense to me, because the PoC code didn't complete the handshake, did it?

Edit: according to Google the reason is that OpenSSL does not honour the "SHOULD" part of the spec :/

In the case of the sample described in the post, there was a TLS handshake that was immediately terminated, followed by a client hello and the heartbeats. The client hello and heartbeats were sent in the clear.

I conjecture that the TLS handshake was used to fingerprint the server, since not all 3 versions of the payload will succeed on all TLS versions.

VRT (the team that maintains a ruleset for Snort) published free rules for detecting Heartbleed attempts. If you read their blog post about it[0] (and the comments), the first 5 bytes of all Heartbeat messages are unencrypted, and you can detect the exploit within those first 5 bytes.

[0] http://vrt-blog.snort.org/2014/04/heartbleed-memory-disclosu...
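For reference, those 5 cleartext bytes are just the TLS record header: a 1-byte content type, a 2-byte version, and a 2-byte length. A toy parser (a sketch; function names are mine):

```python
import struct

TLS_HEARTBEAT = 0x18  # TLS content type 24 = heartbeat

def parse_record_header(hdr: bytes) -> tuple:
    """Unpack the 5 cleartext bytes that start every TLS record:
    content type (1 byte), version (2 bytes), length (2 bytes)."""
    return struct.unpack("!BHH", hdr[:5])

def looks_like_heartbeat(hdr: bytes) -> bool:
    ctype, version, _length = parse_record_header(hdr)
    # 0x0301..0x0303 are TLS 1.0 through 1.2
    return ctype == TLS_HEARTBEAT and 0x0301 <= version <= 0x0303
```

The VRT rules go further and compare lengths to spot the record/payload mismatch that makes Heartbleed work; this sketch only illustrates that the detection really can happen in the clear.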

I must admit to being suspicious about this. I consider myself very, very careful about passwords and other security issues because of various human rights projects I work on. Yet on 16th March, at a very unusual but clever time for attempting such a thing against me (the time I would have tried this, if I were targeting me and had collected relevant pre-attack information), someone from the UK used my exact and recently changed password to log in to my email service, traced back to a very unusual location for attempting such a thing. Luckily the service I use for low-level mail security noticed this strange login and blocked it.

It has puzzled me quite a bit, as nothing like this has (knowingly) occurred to me before, and I take a lot of precautions (which for obvious reasons I'm not going to go into) against keyloggers, malware, MITM, etc. With such target hardening, I was very suspicious of how it occurred.

Of course, maybe I was sleep talking my passwords again :)

A similar thing happened to me. Someone was repeatedly trying to access a gmail account of mine, which is strange because that account had not been active for over 5 years. They supplied the correct credentials every time, and the IP originated from some small village in China. I had also recently changed my password, so I don't think it was merely a coincidence. It is possible that I have been keylogged for 10 years without knowing it, but the timing is uncanny.

Edit: Keylogged for the past 10 years without knowing it, across 5 different machines, with different architectures and operating systems. :-)


If you don't mind me asking the question I always ask people when helping with their security (both cyber and physical) and eliminating an element of potential paranoia:

Would your work/life make you a worthwhile legitimate target? (don't mean to sound rude but I guess it differentiates between random attacks and targeted ones)

Are there new forms of "legitimate" target?

Like, are there algorithms that predict whether XYZ could be a legitimate target in 15 years, and if so, targeting starts now?

If you run the agency and you have this technology, and it's cheap and easy to execute, would you use it, or would you wait for a target to become valuable before putting them on a list?

I say the above because there is a dearth of imagination, which leads people to be very complacent on this topic.

How do you even know what a legit target is anymore, after Snowden dropped docs showing they spied on charities and junior sysadmins?

Valid point. I guess what I meant by legitimate target was "doing something they want to know about specifically enough to mount a relatively targeted attack" (ie: an analyst wants to know something), as opposed to everyone else, about whom they just want to scoop up as much data as possible but who are currently of less interest.

I would say that, after watching this, almost everybody could be a target of state interest, from VCs to a janitor with a cellphone who works at a network they want into: http://youtu.be/3jQoAYRKqhg (FOSDEM 2014 presentation). Especially if you have any kind of trust in an open source community and your patches are accepted blindly.

Personally if I were an evil intel agency I'd be going after GPU developers and manufacturers at all costs to get at their firmware sources or even possibly find ways to sabotage it at the source. It's the final frontier of awesome evilware potential.

- The execution of GPU code, and transfer of data between device and host, do not require admin privileges, so it will run regardless of the host system's privilege settings.

- Malware w/Nvidia GPUs can be statically linked with the CUDA library in a standalone hidden file that never touches the operating system.

- GPU memory is not shared with the CPU so encrypted malware can reside there undetected.

- Run-time polymorphism: malware GPU code can be re-encrypted with a new random key, thus mutating in completely random ways that would be difficult to detect even if you dumped the GPU memory on a regular basis.

- GPU NSA code can easily access the screen framebuffer and broadcast a live feed of whatever somebody is doing.

- GPU NSA code can present the user with a "nothing is wrong" desktop: pretending the virus scanner is still running, hiding daemons, presenting false browser screens that hide the fact that SSL certs have been rejected; all sorts of evil.

Does this mean they had your password, but didn't have access to your Authenticator? So gmail tells you when this happens?

Yes I presume so.

If you don't mind, when did this happen? Almost the exact same thing happened to me around the beginning of this year.

16th March

I had a Yahoo account I haven't touched in YEARS hacked a couple of months ago. Very suspicious.

<OT> Out of curiosity, since you consider yourself a possible target for security agencies, does that affect your choice of OS? Do you use commercial operating systems like Windows and Mac, or just open source software? </OT>

Well this could end up being a whole essay...

I would say that generally it depends on the threat model and risk versus the potential adversary (some African states would differ from China in capabilities, etc.). I think a big part is also really internalising and enforcing the idea that you will be breached at some point and acting accordingly, to minimise the data loss and be able to recover. So for example, if you're only doing low-risk stuff, then the convenience of something like Windows/Mac and a normal email service outweighs any risks from a breach (as long as you know how to assess risk correctly and take the sensible/basic precautions). However, for higher-risk stuff, open-source solutions can potentially offer more security (too many to mention, but tons of good ones, Tails, Tor, etc., here: https://prism-break.org/).

I would caveat, though, that I am not someone who drinks the open-source koolaid all of the time. I think the recent Heartbleed and the suspicions about TrueCrypt go to show that occasionally we make assumptions about the security of these tools based on a false sense of security. Proper security is a longer process of peer review, code audits, etc. that everyone kind of just takes for granted is done with open-source projects.

Sometimes I think it's a little bit like the "free rider" problem, where everyone (including so many in relation to OpenSSL) is happy to use the tool but doesn't want to spend time/money/brainpower looking at, adding to, or reviewing the underlying code. Then we get a lot of myths emerging ("TrueCrypt is very safe", for example; that might be the case, but has it really been audited yet? Also, who is doing the auditing? etc).

Obviously the fact that open source stuff is free and the code can be looked at and eventually fixed is a very good thing. Also, usability is a security feature, and I think some of the secure open-source alternatives have problems in this area, though the Guardian Project, Whisper Systems, etc. are doing a great job of fixing this aspect of things.

Apologies if I got a bit sidetracked on this answer. If there is more specific advice you want, drop me a mail to the email (low risk stuff only) in my profile.

Tails is still using Squeeze, refuses to patch their kernel with grsecurity/PaX, and hasn't updated the release since March, so a dozen known Debian security advisories are unpatched, including Heartbleed. I wouldn't use it; it took them years to even add macchanger on boot for wireless.

Squeeze isn't vulnerable to Heartbleed; the OpenSSL version is too old: heartbeat was introduced in 1.0.1, while Squeeze ships 0.9.8o.
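A quick way to check whether a given OpenSSL version string even includes the heartbeat code (a sketch; helper names are mine):

```python
import re

def numeric_triple(version: str) -> tuple:
    """Pull the numeric major.minor.fix out of an OpenSSL version string
    like "0.9.8o" or "1.0.1f" (letter suffixes are ignored)."""
    m = re.match(r"(\d+)\.(\d+)\.(\d+)", version)
    if m is None:
        raise ValueError("unrecognised version string: %r" % version)
    return tuple(int(x) for x in m.groups())

def could_have_heartbeat(version: str) -> bool:
    """The heartbeat extension, and thus Heartbleed, only exists
    in OpenSSL 1.0.1 and later."""
    return numeric_triple(version) >= (1, 0, 1)
```

Note that this deliberately ignores the letter suffix, so it only tells you whether heartbeat exists at all; a patched 1.0.1g would still return True.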


Oops, the strange decisions at Tails save the day.

I still don't understand their reluctance to patch in grsecurity and/or PaX. It's been on their roadmap for ages. Liberte Linux included it since their first release; too bad the guy abandoned the project last year. Probably the same thing happened to Maxim as to the creators of Anonym.OS: they were hired to make secure builds for companies and stopped maintaining them.

Of course, anybody here can roll their own live distro of BSD/*nix and just review both projects' design documents to see the kinds of security decisions made to implement them, but the vast public is stuck with Tails, which keeps ballooning in size. They need a light version without tons of codecs and video editing software, collaborative editors, or full office suites.

Maintaining a Linux distro is a lot of work, even if it's a variant of another distro. The more you diverge from the upstream, the more work it is.

Note that your examples of distros that adopted PaX/grsecurity are also your examples of distros that have been abandoned.

You need to pick your battles. Being based on Debian oldstable, which is still receiving security support while being a stable base to work on, is a fine decision. Not introducing a big change like grsecurity which will inevitably lead to having to tweak dozens of packages to continue to work properly (and thus have to maintain your own forks of said packages, without being able to easily pull security updates from upstream), is also a reasonable decision.

I invite you to create your own distro if you disagree. Please show me your up to date, privacy and security oriented distro; I'd love to see a comparison of how your style of maintainership will differ from that of Tails.

As an almost-daily Tails user who closely follows its development and knows one of its developers, my impression is that the Tails team is seriously understaffed, which I believe explains all your concerns. There are only two people who more or less regularly write code and prepare releases.

> hasn't updated the release since March so a dozen known Debian security advisories are not patched

It could be that none of the current vulnerabilities seriously affect Tails' stated use cases. Do they? We know that Heartbleed does not affect Tails. Only CVE-2014-2653 looks remotely relevant to me, but I'm no expert. An out-of-schedule Tails release would steal a lot of development time that could instead be used to improve Tails permanently, so they are not done unnecessarily.

> it took them years to even add a macchanger on boot for wireless

Judging by their design document it seems like a pretty big undertaking to do properly without causing a huge user support mess and giving a false sense of security. I for one understand if they chose to deprioritize it for some years in favor of other lower hanging fruit of similar importance. Personally I used to run macchanger manually when needed as the tool itself has been included in Tails for as long as I have been using it (four years).

> Oops, the strange decisions at Tails saves the day.

Since Debian Squeeze still receives security updates, this does not seem very strange at all. I believe the "decision" is another consequence of their lack of manpower. While slightly annoying at times, I can live with outdated packages that lack features I would like to have, as long as they receive security upgrades.

> I still don't understand their reluctance to patch grsecurity and/or pax.

The Tails team has explicitly stated that they do not have the human resources available to afford maintaining all that themselves. Hopefully the Debian kernel team will get their shit together and provide a hardened kernel flavor some time soon.

> [...] the vast public is stuck with Tails which keeps ballooning in size. They need a light version without tons of codecs and video editing software, collaborative editors or full office suites.

One danger with a lightweight Tails distribution is that users are forced to mix data and activities with their normal, insecure OS, which potentially can hurt their anonymity and leave a data trail, thus countering Tails' two main points. Another danger is that the users get tired of Tails' inability to do basic, expected activities and stop using it completely, switching back to their normal, insecure OS for activities that they really should use Tails for.

That said, I do agree that fringe use cases like video editing should be removed, since Tails' size is becoming a real concern. The "additional software" feature available when running Tails from a USB drive should be promoted more actively for such edge cases. Shipping fewer packages in Tails should also decrease the Tails team's maintenance burden, so this actually looks like something worthwhile issuing a feature request for.

Okay, so I guess the answer to my question is: "I do use commercial operating systems".

The rest is common knowledge to the #HN crowd I guess, but thanks for taking the time to write a long answer.

"I do use commercial operating systems" sometimes, and open source other times :)

This would be so easy for the NSA etc. to do that I think we have to consider it as inevitably having occurred.

All they would have had to do is take a close look at any new changes committed to OpenSSL and other critical infrastructure software. Surely they have people doing that -- they would be remiss not to.

Even easier, I would bet a lot of money that they have at least some rudimentary static analysis tools to detect potential vulnerabilities, and this sort of memory error is pretty low-hanging fruit for such a tool. To me it seems almost certain that they knew about it, and they certainly exploited it if they knew.

The bigger question to me is how many of these bugs have they rooted out that have not been made public yet?

Why don't we have groups doing that sort of analysis on our behalf? Programmers are at a fundamental disadvantage when it comes to testing and verifying their own code. You can't trust a shop to verify itself when it comes to infrastructure this critical.

Yeah, it should be a no-brainer. Running such critical code through as many static analysis tools as you can get your hands on should be standard practice. I wonder why Coverity and the rest haven't taken it upon themselves. I remember a story about Coverity running their tool on random open source projects and emailing them about issues they found. Maybe OpenSSL is too far in the hole to start that now.

There is some public work on this:


Unfortunately, most people do not find this to be terribly exciting. Worse still, there is almost no demand for it; most programmers want to quickly hack out something cool, and by the time they begin thinking about security it is too late.

Even easier:

Some dude finds a 0day and sells it to some agency.

What worries me is that the Snowden leaks didn't seem to have a strong emphasis on SSL encryption, suggesting to me that they could circumvent it.

For reference take a look at this article from September. http://www.reuters.com/article/2013/09/05/net-us-usa-securit...

Snowden's files predate the existence of this vulnerability.

No, Snowden's files predate the public knowledge of the vulnerability.

As far as I know, we presently have no way of determining whether or not the NSA had knowledge of the bug.

From the CVE[1], we see that OpenSSL versions from the very start in 2012[2] were vulnerable.

1 https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-01...

2 http://www.openssl.org/source/ (Jan 3 14:41:35 2012 openssl-1.0.1-beta1.tar.gz)

Snowden's files predate the existence of the vulnerability. Many of his files were years old when he exfiltrated them. This vulnerability was created by a specific check-in that has been identified. That does not, of course, mean the NSA didn't use it, or even create it. Both are possible.

Oh, I see what you mean. Fair point.

(It's a wide time range of files he's released so far though, right?)

You'd think that, if any of his files actually covered such a possibility, he would have released that file by now.

Yep, in particular the Guardian article which the linked Reuters one is based on [1], says:

>"For the past decade, NSA has lead [sic] an aggressive, multi-pronged effort to break widely used internet encryption technologies," stated a 2010 GCHQ document. "Vast amounts of encrypted internet data which have up till now been discarded are now exploitable."

> An internal agency memo noted that among British analysts shown a presentation on the NSA's progress: "Those not already briefed were gobsmacked!"

Which certainly sounds like SSL traffic was broadly compromised as far back as 2010.

That doesn't conclusively prove heartbleed isn't of use to these agencies though; for example one possible scenario is that the British analysts were "gobsmacked" by some other undisclosed vulnerability similar in scope to this one, which has since been fixed (and, if you're inclined that way, you could theorize that heartbleed was introduced to replace it..)

[1] http://www.theguardian.com/world/2013/sep/05/nsa-gchq-encryp...

Snowden's files predate large-scale use of SSL (except maybe by banking sites, which are essentially already pwned by the government).

> Snowden's files predate the existence of this vulnerability.

The vulnerability is over two years old. I second scott_karana in thinking that you're wrong.

Pardon me for being cynical about this, but from what we've heard about NSA hacking and industry collaboration, I would say it's highly likely that a large number of the Certificate Authorities themselves are compromised by the NSA or GCHQ, which renders the question moot. Four Certificate Authorities control >90% of the market, 3 of them based in the US and 1 in the UK. With access to the CAs' keys they can sign any number of certificates they want.

Yes, but using a counterfeit certificate requires a much more active, targeted, and potentially-discoverable attack.

With a counterfeit cert, you could pretend to be a target's email host, for example. But you'd need to be an active man-in-the-middle, and you'd only see information during the sessions you actively hijack. The target, or anyone else you mistakenly man-in-the-middle, might notice the changed certificate/authority, thus sounding alarms or clamming up.

If exploiting Heartbleed, in contrast, you'd be taking arbitrarily many random samples of the email host's private memory, in a manner that even the email host's typical logging would not notice. Over time, you'd likely get many login credentials, app and SSL session keys, and possibly even the site's authentic certificate private key; that's something that even a faithless Certificate Authority can't cough up. (They can certify a fake private key... but they don't have their customers' true private keys.) At that point, unless PFS is enabled, all past and future SSL sessions could be decoded via passive eavesdropping.

So if you had a choice between several collaborating CAs or most of the internet running buggy OpenSSL, you'd pick the buggy OpenSSL. And if you had both, you might very well use the heartbleed bug more often, because it's both less detectable and more likely to offer bulk data for analysis.

My CA can create a "twin-me" cert that can be used in future to impersonate me in an active, targeted attack.

Heartbleed can obtain my keys, which can be used to passively decode traffic that they recorded a long time ago; and they can do that as random, untargeted fishing at scale.

Pardon me for being realistic, but I would say exactly the opposite. If the CAs were compromised, that would be the biggest story by far in Snowden's documents, and it would have appeared in the newspapers by now.

I would say they are compromised, just from watching Moxie Marlinspike's presentation about the shitty state of CAs and how he was able to find signing certs just lying around in unprotected directories: https://www.youtube.com/watch?v=Z7Wl2FW2TcA

The Snowden documents (that have been released) were actually very light on technical information. The real details of how BULLRUN works are probably compartmentalized to a very small group of people and not accessible to a random sysadmin.

So it is entirely possible that CAs have been compromised and Glenn Greenwald and the rest of those with the Snowden cache have no idea.

As I commented yesterday on HN, if this ever came to light, it would be the Internet's version of a "Lehman Brothers" style collapse.

Thinking about it more, it would actually be awesome. The cabal of CAs would fall and hopefully a bulletproof distributed system model would eventually replace this snakeoil industry.

How would this have been in the documents? Didn't the documents come first? This came later, I thought.

EDIT: ...might've come later. exact timeline probably unknowable.

As I've mentioned elsewhere, heartbleed combined with bulk data collection means all your historic communications can be read unless your provider was using Perfect Forward Secrecy.

I don't think this aspect is getting as much publicity as it should.

> bulk data collection

Including whatever the McDonald's free wi-fi might store? I'm not insinuating they were an actor, but is that how simple it could've been? Anything communicated over unsecure/not-secure-enough wi-fi could've been captured & apparently now decrypted using newly-acquired information?

I'm halfway sure that's what it means. But that would just be crazy, right?

> Anything communicated over unsecure/not-secure-enough wi-fi could've been captured & apparently now decrypted using newly-acquired information?

Yes. An attacker would have to collect that information, AND have grabbed the private keys from a vulnerable site. But there is nothing technically stopping that from happening. (And of course I expect there may be a market for those keys now)

> But that would just be crazy, right?

Yes. Crazy but possible.

GCHQ have been known to attack IRC networks: https://www.networkworld.com/community/blog/eff-cyber-attack...

It should be illegal for a government to make use of botnets this way.

It is. They don't care.

All is legal for the sovereign. After all, the Law is his tool: why would he consent to its use against him?

I don't know if you're trolling or genuinely don't know how this works. If you don't, please read up on the constitution of whatever country you live in.

The respect for those constitutions has eroded significantly since the beginning of the century, but they still exist and we must still insist on them. Don't give up the achievements of the past that easily.

While we're handing out reading assignments, I'd encourage you to read something that wasn't assigned in junior-high civics class; perhaps Machiavelli? Political power has been exercised for millennia, and its nature is far closer to the caricature I offered than anything written in any newfangled constitution.

The point is that a constitution is not an achievement, the "unlocking" of which would transform a society in any lasting way. It might be more accurate to say that a constitution or similar document is an aspiration, but since few such have been fulfilled it's foolish to be surprised when we fall short. The fault is not in our constitutions, but in ourselves, that we are underlings. We knew when we built this monstrous war and imprisonment machine that it would be turned against us, yet we built it anyway.

The problem is that your statement "all is legal for the sovereign" is just plain false according to the most widely accepted definition of legality.

By propagating it as if it were true, you are in fact playing into the hands of those who want society to bow to a different notion of legality - one where the rule of law has been eliminated. That is, by writing what you write, you are needlessly conceding ground to the bad guys.

I know it's tempting to play (or be?) the jaded cynic. But it seems to me that if we want to keep politicians and government officials accountable to the laws, then a necessary (though of course not sufficient) condition for that is that we insist on calling their actions illegal when they are illegal.

As I can't access this page from Chrome (it doesn't let me because "it's not secure"), here is the archive.org link


Could it be that, because of Heartbleed, now I can't access eff.org?

Are you joking? If not please report what error you're getting in Chrome.

No joking, where can I report that?

For anyone following along at home, we looked into this and it seems to be caused by the fact that you're using an older operating system that doesn't ship with the StartCom CA cert that eff.org uses. So probably not an attack. :)

EFF uses StartCom?!

Emailing me works. yan at eff dot org.

Probably not. If they did, they would have raised these security flaws to the general public in the interest of security.

This wouldn't surprise me one bit. Governments employing hackers to exploit whatever they can get their hands on is not something new.

Also, it makes one think: what other exploits are out there being used that we're not aware of?

My theory is that basically all of our traffic is compromised, we just don't know it yet. It seems clear that the NSA has been actively working to find and exploit every vulnerability they can, and they have the power of a well-funded concerted effort, secret physical access, and gag orders all on their side. I bet they can do a whole lot more than what we know.

Very shameless plug: we just launched a t-shirt campaign with teespring.com. All proceeds will be donated to the OpenSSL Software Foundation:

* Campaign: http://teespring.com/hbts

* HN thread: https://news.ycombinator.com/item?id=7567461

Is it possible for you to allocate those funds to (specifically) funding a security audit and code refactor of OpenSSL? Cryptography researcher Matthew Green has stated interest in starting a campaign: https://twitter.com/matthew_d_green/status/45386223750218547...

Yeah, I actually started that campaign. You'll notice it is the tweet he is replying to ;)

I don't get the number 1396891800 - what does it mean?

Number of seconds between Jan 1 1970 and the discovery of Heartbleed, I suppose. A Unix time_t timestamp.
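That guess checks out; converting the number on the shirt (Python sketch):

```python
from datetime import datetime, timezone

ts = 1396891800  # the number printed on the shirt

# Interpret it as Unix epoch seconds, rendered in UTC.
print(datetime.fromtimestamp(ts, tz=timezone.utc))
# 2014-04-07 17:30:00+00:00, the day Heartbleed was publicly disclosed
```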

That makes me feel less dumb - I thought I was missing something obvious. Thx!

Obviously it's Mrs. Charlotte Faye Wylie Med's National Provider Identifier Number -- what did you think it was? Did you already forget??!

The shirt would be better with nothing on the back. I would've reflexively clicked buy.

+1, didn't buy due to back

Could the font of 'Never Forget.' be changed to match the one of the timestamp?

How will this help? OpenSSL is broken by design (the process, the code standards, the philosophy). Money won't make it better.


I don't think so, mostly because to get useful information out of memory after only one heartbeat would be quite lucky.

If this were an actual attack, I think we'd see many more heartbeats in Koeman's logs.

