(If the latter, this could be an unfortunate case where Perfect Forward Secrecy, when enabled, also helps obscure exploits from later forensic discovery...)
"However, a HeartbeatRequest message SHOULD NOT be sent during handshakes. If a handshake is initiated while a HeartbeatRequest is still in flight, the sending peer MUST stop the DTLS retransmission timer for it. The receiving peer SHOULD discard the message silently, if it arrives during the handshake. In case of DTLS, HeartbeatRequest messages from older epochs SHOULD be discarded."
But that doesn't make sense to me, because the PoC code didn't complete the handshake, did it?
Edit: according to Google the reason is that OpenSSL does not honour the "SHOULD" part of the spec :/
I conjecture that the TLS handshake was used to fingerprint the server, since not all 3 versions of the payload will succeed on all TLS versions.
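For anyone trying to follow along, the trick hinges on the heartbeat message layout from RFC 6520: a one-byte type, a two-byte payload length, the payload, then padding. A minimal sketch in Python of how a request can declare a payload length far larger than the payload it actually carries (illustrative only, not a working exploit; the byte values are just examples):

```python
import struct

def build_heartbeat(payload: bytes, claimed_len: int) -> bytes:
    """Build a TLS heartbeat message body per RFC 6520:
    1 byte type (1 = request), 2 bytes payload length, payload, padding."""
    padding = b"\x00" * 16  # RFC 6520 requires at least 16 bytes of padding
    return struct.pack(">BH", 1, claimed_len) + payload + padding

# An honest request: claimed length matches the actual payload.
honest = build_heartbeat(b"ping", 4)

# A Heartbleed-style request: tiny payload, huge claimed length.
# A vulnerable OpenSSL echoes back `claimed_len` bytes, reading far
# past the real payload into adjacent heap memory.
malicious = build_heartbeat(b"ping", 0xFFFF)

claimed = struct.unpack(">H", malicious[1:3])[0]
print(claimed, len(b"ping"))  # 65535 vs 4: the difference is leaked memory
```

The fix in OpenSSL 1.0.1g is exactly the bounds check the spec implies: discard any heartbeat whose claimed payload length doesn't fit in the received record.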
It has puzzled me quite a bit, as nothing like this has (knowingly) occurred to me before, and I take a lot of precautions (which for obvious reasons I'm not going to go into) against keyloggers, malware, MITM, etc. With such target hardening I was very suspicious of how it occurred.
Of course, maybe I was sleep-talking my passwords again :)
Edit: Keylogged for the past 10 years without knowing it, across 5 different machines, with different architectures and operating systems. :-)
If you don't mind me asking the question I always ask people when helping with their security (both cyber and physical) and eliminating an element of potential paranoia:
Would your work/life make you a worthwhile legitimate target? (don't mean to sound rude but I guess it differentiates between random attacks and targeted ones)
Like are there algorithms that predict if XYZ could be a legitimate target in 15 yrs and if so, targeting starts now.
If you ran the agency and you had this technology, and it's cheap and easy to execute, would you use it, or would you wait for a target to become valuable before putting them on a list?
I say the above because there is a dearth of imagination, which leads people to be very complacent on this topic.
Personally if I were an evil intel agency I'd be going after GPU developers and manufacturers at all costs to get at their firmware sources or even possibly find ways to sabotage it at the source. It's the final frontier of awesome evilware potential.
- The execution of GPU code, and the transfer of data between device and host, do not require admin privileges, so GPU code will always run regardless of the host system's privilege settings.
- Malware w/Nvidia GPUs can be statically linked with the CUDA library in a standalone hidden file that never touches the operating system.
- GPU memory is not shared with the CPU so encrypted malware can reside there undetected.
- Run-time polymorphism: malware GPU code can be re-encrypted with a new random key thus mutate in completely random ways that would be difficult to detect even if you dumped the GPU memory on a regular basis.
- GPU NSA code can easily access the screen framebuffer and broadcast a live feed of whatever somebody is doing.
- GPU NSA code can present the user with a "nothing is wrong" desktop: pretending the virus scanner is still running, hiding daemons, presenting false browser screens that hide the fact that SSL certs have been rejected, all sorts of evil.
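The run-time polymorphism point above can be illustrated with a toy sketch (plain Python and XOR standing in for real GPU-side encryption; every name here is hypothetical): re-encrypting the same payload under a fresh random key produces a different byte pattern every time, so signature matching against dumped memory keeps missing.

```python
import os

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR `data` with a repeating `key` (toy cipher, for illustration only)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

payload = b"malicious-gpu-kernel-bytes"  # stand-in for resident GPU code

def mutate(payload: bytes) -> tuple[bytes, bytes]:
    """Re-encrypt the payload under a fresh random key: identical behaviour
    once decrypted, but a brand-new in-memory byte pattern each time."""
    key = os.urandom(16)
    return xor_bytes(payload, key), key

blob_a, key_a = mutate(payload)
blob_b, key_b = mutate(payload)

# The two encrypted images differ byte-for-byte (with overwhelming
# probability), yet each decrypts back to the identical payload.
print(xor_bytes(blob_a, key_a) == payload)
print(xor_bytes(blob_b, key_b) == payload)
```

A scanner that fingerprints the encrypted blob has nothing stable to match; only the tiny decryption stub stays constant, and on a GPU even that sits outside what most host-side AV ever inspects.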
I would say that generally it depends on the threat model and risk versus potential adversary (some African states would differ from China in capabilities etc). I think a big part is also really internalising and enforcing the idea that you will be breached at some point and acting accordingly, to minimise the data loss and be able to recover. So for example, if you're only doing low-risk stuff, then the convenience of something like Windows/Mac and a normal email service outweighs any risks from a breach (as long as you know how to assess risk correctly and take the sensible/basic precautions). However, for higher-risk stuff, open-source solutions can generally offer more security (too many to mention, but tons of good ones here - Tails, TOR etc: https://prism-break.org/).
I would caveat, though, that I am not someone who drinks the open-source kool-aid all of the time. I think the recent Heartbleed and the suspicions about TrueCrypt go to show that occasionally we make assumptions about the security of these tools based on a false sense of security. Proper security is a longer process of peer review, code audits etc. that everyone just takes for granted is done with open-source projects.
Sometimes I think it's a little bit like the "free rider" problem, where everyone (including so many in relation to OpenSSL) is happy to use the tool but doesn't want to spend time/money/brainpower looking at, adding to, or reviewing the underlying code. Then we get a lot of myths emerging ("TrueCrypt is very safe", for example - that might be the case, but has it really been audited yet? And who is doing the auditing?).
Obviously the fact that open source stuff is free and the code can be looked at and eventually fixed is a very good thing. Also, usability is a security feature and I think some of the secure open-source alternatives have problems in this area - though the work of the Guardian Project, Whisper Systems etc is doing a great job of fixing this aspect of things.
Apologies if I got a bit sidetracked on this answer. If there is more specific advice you want, drop me a mail to the email (low risk stuff only) in my profile.
Oops, the strange decisions at Tails save the day.
I still don't understand their reluctance to patch in grsecurity and/or PaX. It's been on their roadmap for ages. Liberte Linux has included it since their first release; too bad the guy abandoned the project last year. Probably the same thing happened to Maxim as to the creators of Anonym.OS: they were hired to make secure builds for companies and stopped maintaining the public ones.
Of course, anybody here can roll their own live distro of BSD/*nix and just review both projects' design documents to see the kinds of security decisions made to implement them, but the vast public is stuck with Tails, which keeps ballooning in size. They need a light version without tons of codecs, video editing software, collaborative editors, or full office suites.
Note that your examples of distros that adopted PaX/grsecurity are also your examples of distros that have been abandoned.
You need to pick your battles. Being based on Debian oldstable, which is still receiving security support while being a stable base to work on, is a fine decision. Not introducing a big change like grsecurity which will inevitably lead to having to tweak dozens of packages to continue to work properly (and thus have to maintain your own forks of said packages, without being able to easily pull security updates from upstream), is also a reasonable decision.
I invite you to create your own distro if you disagree. Please show me your up to date, privacy and security oriented distro; I'd love to see a comparison of how your style of maintainership will differ from that of Tails.
> hasn't updated the release since March so a dozen known Debian security advisories are not patched
It could be that none of the current vulnerabilities seriously affect Tails' stated use cases. Do they? We know that Heartbleed does not affect Tails. Only CVE-2014-2653 looks remotely relevant to me, but I'm no expert. An out-of-schedule Tails release would steal a lot of development time that could instead be used to improve Tails permanently, so such releases are not done unnecessarily.
> it took them years to even add a macchanger on boot for wireless
Judging by their design document it seems like a pretty big undertaking to do properly without causing a huge user support mess and giving a false sense of security. I for one understand if they chose to deprioritize it for some years in favor of other lower hanging fruit of similar importance. Personally I used to run macchanger manually when needed as the tool itself has been included in Tails for as long as I have been using it (four years).
> Oops, the strange decisions at Tails save the day.
Since Debian Squeeze still receives security updates, this does not seem very strange at all. I believe the "decision" is another consequence of their lack of manpower. While slightly annoying at times, I can live with outdated packages that lack features I would like to have, as long as they receive security upgrades.
> I still don't understand their reluctance to patch grsecurity and/or pax.
The Tails team has explicitly stated that they do not have the human resources available to afford maintaining all that themselves. Hopefully the Debian kernel team will get their shit together and provide a hardened kernel flavor some time soon.
> [...] the vast public is stuck with Tails which keeps ballooning in size. They need a light version without tons of codecs and video editing software, collaborative editors or full office suites.
One danger with a lightweight Tails distribution is that users are forced to mix data and activities with their normal, insecure OS, which potentially can hurt their anonymity and leave a data trail, thus countering Tails' two main points. Another danger is that the users get tired of Tails' inability to do basic, expected activities and stop using it completely, switching back to their normal, insecure OS for activities that they really should use Tails for.
That said, I do agree that fringe use cases like video editing should be removed since Tails' size is becoming a real concern. The "additional software" feature available when running Tails from a USB drive should be promoted more actively for such edge cases. Shipping less packages in Tails should also decrease the Tails team's maintenance burden so this actually looks like something worthwhile issuing a feature request for.
The rest is common knowledge to the HN crowd I guess, but thanks for taking the time to write a long answer.
All they would have had to do is take a close look at any new changes committed to OpenSSL and other critical infrastructure software. Surely they have people doing that -- they would be remiss not to.
The bigger question to me is how many of these bugs have they rooted out that have not been made public yet?
Unfortunately, most people do not find this to be terribly exciting. Worse still, there is almost no demand for it; most programmers want to quickly hack out something cool, and by the time they begin thinking about security it is too late.
Some dude finds a 0day and sells it to some agency.
For reference take a look at this article from September.
As far as I know, we presently have no way of determining whether or not the NSA had knowledge of the bug.
From the CVE, we see that OpenSSL versions from the very start in 2012 were vulnerable.
(Jan 3 14:41:35 2012 openssl-1.0.1-beta1.tar.gz)
(It's a wide time range of files he's released so far though, right?)
>"For the past decade, NSA has lead [sic] an aggressive, multi-pronged effort to break widely used internet encryption technologies," stated a 2010 GCHQ document. "Vast amounts of encrypted internet data which have up till now been discarded are now exploitable."
> An internal agency memo noted that among British analysts shown a presentation on the NSA's progress: "Those not already briefed were gobsmacked!"
Which certainly sounds like SSL traffic was broadly compromised as far back as 2010.
That doesn't conclusively prove heartbleed isn't of use to these agencies though; for example one possible scenario is that the British analysts were "gobsmacked" by some other undisclosed vulnerability similar in scope to this one, which has since been fixed (and, if you're inclined that way, you could theorize that heartbleed was introduced to replace it..)
The vulnerability is over two years old. I second scott_karana in thinking that you're wrong.
With a counterfeit cert, you could pretend to be a target's email host, for example. But you'd need to be an active man-in-the-middle, and you'd only see information during the sessions you actively hijack. The target, or anyone else you mistakenly man-in-the-middle, might notice the changed certificate/authority, thus sounding alarms or clamming up.
If exploiting heartbleed, in contrast, you'd be taking arbitrarily-many random samples of the email host's private memory, in a manner that even the email host's typical logging would not notice. Over time, you'd likely get many login credentials, app and SSL session keys, and possibly even the site's authentic certificate private key – that's something that even a faithless Certificate Authority can't cough up. (They can certify a fake private key... but they don't have their customer's true private keys.) At that point, unless PFS is enabled, all past and future SSL sessions could be decoded via passive eavesdropping.
So if you had a choice between several collaborating CAs or most of the internet running buggy OpenSSL, you'd pick the buggy OpenSSL. And if you had both, you might very well use the heartbleed bug more often, because it's both less detectable and more likely to offer bulk data for analysis.
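As a toy model of the "arbitrarily-many random samples" point (all numbers are made up for illustration, and real Heartbleed leaks come from heap memory adjacent to the request buffer rather than uniformly random offsets): if each heartbeat is treated as exposing a random ~64 KB window of a process heap, repeated requests cover it quickly, including any fixed region holding a secret.

```python
import random

random.seed(0)  # deterministic, purely for reproducibility of the example

HEAP_SIZE = 4 * 1024 * 1024   # pretend the server heap is 4 MB
LEAK_SIZE = 64 * 1024         # Heartbleed returns up to ~64 KB per heartbeat
SECRET_AT = 1_337_000         # hypothetical offset of key material in the heap

def one_heartbeat() -> range:
    """Model one leak as a random 64 KB window of the heap."""
    start = random.randrange(HEAP_SIZE - LEAK_SIZE)
    return range(start, start + LEAK_SIZE)

# Count how many of 1000 simulated heartbeats happen to expose the secret.
# Each window has roughly a 1.6% chance of covering SECRET_AT, so over
# 1000 requests the expected number of hits is around 16.
hits = sum(1 for _ in range(1000) if SECRET_AT in one_heartbeat())
print(hits)
```

And because nothing about a well-formed-looking heartbeat trips ordinary application logging, those thousand requests leave essentially no trace, which is the asymmetry with the noisy, active MITM path described above.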
Heartbleed can obtain my keys, which can then be used to passively decode traffic that was recorded a long time ago; and that can be done as random, untargeted fishing at scale.
So it is entirely possible that CAs have been compromised and Glenn Greenwald and the rest of those with the Snowden cache have no idea.
Thinking about it more, it would actually be awesome. The cabal of CAs would fall and hopefully a bulletproof distributed system model would eventually replace this snakeoil industry.
EDIT: ...might've come later. exact timeline probably unknowable.
I don't think this aspect is getting as much publicity as it should.
Including whatever the McDonald's free wi-fi might store? I'm not insinuating they were an actor, but is that how simple it could have been? Anything communicated over insecure/not-secure-enough wi-fi could have been captured, and can apparently now be decrypted using newly-acquired keys?
I'm halfway sure that's what it means. But that would just be crazy, right?
Yes. An attacker would have to collect that information, AND have grabbed the private keys from a vulnerable site. But there is nothing technically stopping that from happening. (And of course I expect there may be a market for those keys now)
But that would just be crazy, right?
Yes. Crazy but possible.
The respect for those constitutions has eroded significantly since the beginning of the century, but they still exist and we must still insist on them. Don't give up the achievements of the past that easily.
The point is that a constitution is not an achievement, the "unlocking" of which would transform a society in any lasting way. It might be more accurate to say that a constitution or similar document is an aspiration, but since few such have been fulfilled it's foolish to be surprised when we fall short. The fault is not in our constitutions, but in ourselves, that we are underlings. We knew when we built this monstrous war and imprisonment machine that it would be turned against us, yet we built it anyway.
By propagating it as if it were true, you are in fact playing into the hands of those who want society to bow to a different notion of legality - one where the rule of law has been eliminated. That is, by writing what you write, you are needlessly conceding ground to the bad guys.
I know it's tempting to play (or be?) the jaded cynic. But it seems to me that if we want to keep politicians and government officials accountable to the laws, then a necessary (though of course not sufficient) condition for that is that we insist on calling their actions illegal when they are illegal.
Could it be that because of heartbleed now i can't access eff.org?
Also, it makes one think: what other exploits are out there being used that we're not aware of?
* Campaign: http://teespring.com/hbts
* HN thread: https://news.ycombinator.com/item?id=7567461
I don't think so, mostly because to get useful information out of memory after only one heartbeat would be quite lucky.
If this were an actual attack, I think we'd see many more heartbeats in Koeman's logs.