
Heartbleed - crumbly
https://www.schneier.com/blog/archives/2014/04/heartbleed.html
======
fidotron
One of the things I'm taking from this episode is just how worthless a lot of
prognostication about security by a load of "experts" on the internet is,
especially of the "trust us to get it right" variety.

Frankly, their whining about how hard crypto is is partly responsible for the
monoculture we have. Yes, it's difficult (more so in protocol than
implementation), but they are so off-putting to new people coming into the
field it is insane.

Clearly OpenSSL dev is broken, at least partly because everyone assumes
everyone else is auditing all 300k lines of it, but also I can't help
wondering if this calls for stronger component isolation within cryptosystems.
For example, protocol implementation, encoding and decoding seem like they
should all be totally isolated, so a disaster like this doesn't mean you could
be leaking information from the rest of the system. I imagine many a HSM
vendor has been quite pleased by this news.

~~~
mikeash
This is (I hope) a once-in-a-lifetime incident, so we have to be careful not
to extrapolate it too hard. On the other hand, it's a _seriously big deal_.
Overall, it seems to strongly challenge a couple of important assumptions.

As you say, it's a strong challenge to the mantra of "never implement your own
crypto". I think ( _think_ ) it still holds for the crypto primitives. If
you're reimplementing AES you're probably doing it wrong. But the protocols?
I'm not so sure now. Common wisdom seemed to be that if you implement the
protocols yourself you'd screw them up, and you should stick with tried-and-
true existing implementations. Now it's apparent that "tried" doesn't have to
imply "true". Something being used by millions of people for years doesn't
prevent it from having a huge vulnerability for years. Are your odds better or
worse rolling your own? I'm not so sure now.

Consider Apple's "goto fail" bug, for example. Among a lot of other stuff,
they caught some criticism for reimplementing TLS instead of just using
OpenSSL. Well, if they had used OpenSSL instead, it turns out that they would
have been shipping an even more serious bug for even more time.

It's also interesting to me how it challenges the idea of encrypting stuff by
default. For years, people have been saying that as much traffic as possible
should be encrypted, even unimportant stuff that nobody cares about. By doing
that, the idea goes, using encryption isn't suspicious and you force attackers
to spread out their resources. If only a small amount of traffic is encrypted,
attackers can focus just on that traffic. Accordingly, a lot of sites that
didn't really need it enabled SSL or even required it, including my own. By
doing this, a lot of them inadvertently made things much much _worse_. A site
that's only accessible over HTTP is much better off than a site that's
accessible over HTTPS but vulnerable to heartbleed. I don't think the general
idea is _wrong_ , but it certainly gives me pause, and I think more
consideration has to be given to the increase in attack surface you take on
when you enable encryption.

In any case, I really hope we see some new crypto projects come out of this,
or more resources put into existing OpenSSL alternatives.

~~~
ISL
> This is (I hope) a once-in-a-lifetime incident, so we have to be careful not
> to extrapolate it too hard.

Don't bet on it.

~~~
mikeash
Yeah, my inner pessimist says that's a stupid thing to say. On the other hand,
when was the last time something _this_ bad and widespread showed up?

~~~
gojomo
Let's surmise that the reason two distinct researchers (Mehta and Codenomicon)
found this same bug in a short timeframe is that the recent Apple & GnuTLS
bugs have caused many teams to begin a fresh review of long-ignored shared
codebases.

If so, is this the _first_ major bug discovered, with many more to come as
they are flushed out by the new level of vigilance? Or, is it the only/last
one, being revealed now because the deep dive has now wrapped up?

Those seem to me to be the interesting questions.

------
area51org
_" Catastrophic" is the right word. On the scale of 1 to 10, this is an 11._

No, it's not. If you asked me — I was a CISO not many years ago — I'd call
this an 8. Schneier means well, but he has a tendency to exaggerate. (Here is
an example of him suggesting that SOAP and other web services never be used
because they "sneak" through HTTP and are therefore inherently insecure:
[https://www.schneier.com/crypto-gram-0006.html#SOAP](https://www.schneier.com/crypto-gram-0006.html#SOAP))

A 10 would be a bug that was not easily patched and that gave complete
control of servers to any interested script kiddie, with enough information
on thousands or millions of web users exposed for credit card fraud and
identity theft to be acted on before the hole could be plugged.

This is not that case. And it's certainly not an "11". Most vital websites
have either already been patched or are about to be.

We can now keep calm and carry on.

~~~
orthecreedence
I'm going to have to go ahead and sort of disagree with you. A security
library, specifically doing crypto, that is installed/embedded/used
_everywhere_ that trivially leaks plaintext data remotely to _anyone who comes
knocking_ (including passwords, keys, CC numbers, etc) is a complete failure.

Sure, it could hand out shells into a remote system, or hell it could launch a
bunch of nuclear rockets as well...that would be very bad. But you seem to
miss the point that perhaps someone's password to their shell (or maybe a
nuclear launch code) _is_ going over the wire and is intercepted...by a
hostile government agency, or a 13 year old playing with a python script.
There are endless, devastating scenarios one can think up caused by such a
critical bug in the very fabric of the secure communication of the internet.

This is a pretty big deal.

~~~
btbuildem
You're right - but I think a rating of 11 should indeed be reserved for a bug
that does launch nuclear rockets while handing out root shells.

------
nzp
Eh, I remember how, about a month ago when that GnuTLS bug was found, the
dominant sentiment on HN was along the lines of "how come anyone is using
GnuTLS instead of OpenSSL", "GnuTLS codebase is horrible, use OpenSSL", "the
guy maintaining GnuTLS is an idiot, use OpenSSL", "OpenSSL has more expert
eyes on it", etc. Although I prefer OpenSSL (for no particular reason), this
all seemed so obviously stupid and shortsighted, not to mention that some of
it was factually wrong. And what do you know, a month later we get an order of
magnitude worse bug in OpenSSL, one that was also probably an order of
magnitude easier to detect. I made a comment[0] then along those lines,
thinking to myself that I'd really hate it if I got to say "told you so" but
that unfortunately I probably would get the chance. I didn't think it would be
this bad, though.

[0]
[https://news.ycombinator.com/item?id=7346879](https://news.ycombinator.com/item?id=7346879)

------
clarkevans
Even if we generate a new key pair and replace our certificate, aren't we
still vulnerable to MITM attacks if someone had downloaded the old private key
and used the old certificate?

~~~
higherpurpose
That's why it's so vital for everyone to implement Perfect Forward Secrecy.
Yes, it's a little late for that now in regards to this bug, but who knows
what other bugs like this will be discovered in the future. Let's at least
not make the same mistake twice by failing to take advantage of PFS, which
could've prevented most of the damage from Heartbleed.
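To make that concrete, here's a toy sketch of why ephemeral Diffie-Hellman
gives forward secrecy. The numbers are purely illustrative (a 32-bit prime is
hopelessly insecure); it only shows the shape of the exchange:

```python
# Toy finite-field Diffie-Hellman -- illustrative parameters only.
# Real DHE/ECDHE uses 2048-bit groups or elliptic curves.
P = 4294967291  # largest prime below 2**32; far too small in practice
G = 5

def dh_session_key(my_ephemeral_secret, peer_public):
    """Shared session key from our ephemeral secret and the peer's
    ephemeral public value."""
    return pow(peer_public, my_ephemeral_secret, P)

# Each side picks a *fresh* ephemeral secret per handshake:
a, b = 123456789, 987654321   # would be random per session
A = pow(G, a, P)              # client -> server
B = pow(G, b, P)              # server -> client

assert dh_session_key(a, B) == dh_session_key(b, A)

# The server's long-term (certificate) key only *signs* its ephemeral
# value to stop MITM; it never encrypts the session key. An attacker who
# later steals the long-term key (e.g. via Heartbleed) still can't
# recover past session keys -- the ephemerals were discarded.
```

Compare plain RSA key exchange, where the session key is encrypted to the
long-term key, so a later key leak decrypts all recorded traffic.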

~~~
Perseids
As much as I'm a fan of Perfect Forward Secrecy, it does not protect you
against MITM with old certificates.

------
efficientarch
Also see this explanation from Troy Hunt, which hasn't made the front page.
[https://news.ycombinator.com/item?id=7558597](https://news.ycombinator.com/item?id=7558597)

~~~
yiedyie
TL;DR:

Troy Hunt: ”The Heartbleed bug itself was introduced in December 2011, in fact
it appears to have been committed about an hour before New Year’s Eve (read
into that what you will). The bug affects OpenSSL version 1.0.1 which was
released in March 2012 through to 1.0.1f which hit on Jan 6 of this year. The
unfortunate thing about this timing is that _you’re only vulnerable if you’ve
been doing “the right thing” and keeping your versions up to date!_ Then
again, for those that believe you need to give new releases a little while to
get the bugs out before adopting them, would they really have expected it to
take more than two years? Probably not.”

~~~
krallja
Debian squeeze (oldstable) is not vulnerable, because it's still running
0.9.8o-4squeeze14.

0.9.8o was released 2010-06-01, almost 4 years ago!

~~~
plorkyeran
[http://patch-tracker.debian.org/package/openssl/0.9.8o-4squeeze14](http://patch-tracker.debian.org/package/openssl/0.9.8o-4squeeze14)

It's had a _lot_ of security fixes backported to it in the four years since
the upstream release.

------
tptacek
There is virtually no useful software vulnerability for which you can't
conjure up a compelling-sounding narrative of deliberate introduction (or
"bugdoor"). It's like numerology. So you should be wary of people insinuating
about bugs.

What makes Dual_EC so compelling to experts is the "Nobody but us" nature of
the flaw: the bug is cryptographically limited to a small number of actors.
FedGov buys hundreds of millions of dollars of COTS gear with OpenSSL
embedded, and this bug is so simple that middle-schoolers are exploiting it.
You shouldn't even need to ask if it was deliberate.

------
yp_master

                                 Schneier.com Has Moved
    
       As of March 3rd, Schneier.com has moved to a new server. If you've used a
       hosts file to map www.schneier.com to a fixed IP address, you'll need to
       either update the IP to 204.11.247.93, or remove the line. Otherwise,
       either your software or your name server is hanging on to old DNS
       information much longer than it should.
    

Ok, how should I "authenticate" that the site at the new address is the "real"
one?

I know, I'll use OpenSSL and HTTPS!

------
cliveowen
More on the vulnerability from cryptographer Matthew Green:

[http://blog.cryptographyengineering.com/2014/04/attack-of-week-openssl-heartbleed.html](http://blog.cryptographyengineering.com/2014/04/attack-of-week-openssl-heartbleed.html)

~~~
rossjudson
I am curious about which static code analysis tools pick up this problem.
Could it have been found automatically by Coverity, for example?

~~~
BudVVeezer
We tested most of the major ones at work on the faulty code, and only PC-Lint
caught the issue.

~~~
rossjudson
Wow -- that's scary.

------
feelstupid
Forgive my naivety here, but is there any way to tell which sites over the
last 2 years I/we have used that may require new passwords, and whether
they've been fixed?

I'm kinda looking for a site that lists the major sites (Banks, Socials,
Shops, etc.) and shows a status for each: you're fine / should change your
password now / await a fix before changing your password.

~~~
CanSpice
Yes. All of them.

Seriously, it's easier to just change all of your passwords than to hunt down
a list (that will be incomplete and give you a false sense of security),
cross-match against servers you might have an account on, then change their
passwords.

Just change them all and be done with it.

The "whether they've been fixed" part is a little tougher, because that lets
you know when you should change your password. General sentiment I've been
seeing is give it a week for everybody to fix their stuff (even this might be
a little long) and then change your passwords. If a given site says either "we
weren't affected, here's why" or "we've patched our stuff, we're all good"
then you should change your password on that site ASAP.

~~~
jeff303
When would be the optimal time to perform these password changes? I am
assuming that not every affected site has been patched yet, and it would be
pointless to change the password, and log in, before they have fixed the
problem.

~~~
sliverstorm
Use one of the testers to check if the website is currently vulnerable.

~~~
ams6110
And then use your browser's certificate inspector to check the issue date of
the certificate. If it's earlier than April 7, 2014, it's still insecure.
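That heuristic is easy to script with Python's stdlib; the cutoff date is
this thread's rule of thumb, not an authoritative test:

```python
import ssl

# Rule-of-thumb check: a certificate whose notBefore date precedes the
# Heartbleed disclosure was not reissued afterwards, so its private key
# may have leaked. Dates use OpenSSL's GMT text form, which
# ssl.cert_time_to_seconds() parses.
CUTOFF = ssl.cert_time_to_seconds("Apr  7 00:00:00 2014 GMT")

def reissued_after_heartbleed(not_before: str) -> bool:
    """True if the cert was issued on/after the disclosure date."""
    return ssl.cert_time_to_seconds(not_before) >= CUTOFF

# Example notBefore strings, in the form ssl.getpeercert() returns:
assert reissued_after_heartbleed("Apr  9 12:00:00 2014 GMT")
assert not reissued_after_heartbleed("Jan  1 00:00:00 2013 GMT")
```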

~~~
sliverstorm
Only if they were impacted by the bug.

It's rather unreasonable to expect sites that know they were not impacted to
update their certificates. So unless you want to write off your bank's website
for the next year or three until the cert expires and they renew it (banks
seem to have avoided this; suddenly dawdling behind the bleeding edge doesn't
look so bad!), scorched-earth policies are a bit much.

Actually, I might even say the opposite: if the site is secure and the
certificate is older than 4/7/2014, that suggests the site was not impacted.
If the certificate is newer than 4/7/2014, that pretty much guarantees the
site was impacted. It is possible the site patched OpenSSL and did not renew
the cert, but in general people are not going to do one without the other.

------
rbanffy
On XKCD [0] you can find a spoof of the Tears in rain soliloquy [1]. I never
imagined it was rewritten on the night before shooting by Rutger Hauer.

[0] [https://xkcd.com/1353/](https://xkcd.com/1353/)

[1]
[https://en.wikipedia.org/wiki/Tears_in_rain_soliloquy](https://en.wikipedia.org/wiki/Tears_in_rain_soliloquy)

------
bcohen5055
"At this point, the odds are close to one that every target has had its
private keys extracted by multiple intelligence agencies" Any proof of this?

~~~
danielweber
It's bullshit.

I'm 100% positive that many targets have had their keys extracted, but it's
hard-to-impossible for the attacker to choose what fragment of memory the
server returns, and it depends heavily on the server in question. What works
against nginx won't work against lighttpd or apache.

I hit a site I control repeatedly yesterday and couldn't even get any common
byte-arrays in common across hundreds of connections.

Of course, as good practice, all organizations should treat their keys as
compromised and issue new ones.

Also, his claim that "it leaves no trace" is a problem: it's actually trivial
to recognize the traffic pattern.

------
jypepin
And that's why every single login system should have two-factor auth - I
started using google's authenticator app for my google and github account and
it works just great. I wish I could use it for every account I have.

~~~
eridius
SSL is not login.

~~~
jypepin
Sorry, I didn't mean to say that 2FA was the solution, but I think leaked
login info is one of the consequences of Heartbleed, right? So my point was
just that, at least with 2FA, the risk of having your accounts stolen is
reduced.

------
TomGullen
Just a question (and I don't know too much about this). Is there any chance
that certificate authorities who give out warranties could actually have to
pay out on them now? Do any of them use OpenSSL?

~~~
danielweber
Without looking at the specifics, the CA can't be held responsible for you
leaking the key yourself.

Or do you mean if the CA companies _themselves_ were compromised? That's a big
separate issue. Even if the web process is the one that generates the keys
(I'm skeptical, but it's possible), any keys made that way would quickly be
moved out of memory, unless they were made that day.

------
facepalm
Is there really a point in changing keys and passwords? It seems to me that if
an attacker got the passwords, I should assume they already installed a
rootkit on my server?

I'm honestly not sure how to react. I'm not really a sysadmin, but I have a
server online.

I suppose I could start a new server, but how can I be sure that the provider
has already patched all their holes? If they've been hacked, maybe the images
they use for preparing new servers have been compromised, too? Might be better
to wait a little before restarting everything from scratch?

------
fixermark
Request for clarification from those who understand the bug's workings:

The memory it can expose is limited to that visible to the process using
openssl, right? Or does the bug reside low enough in the kernel stack to
disregard memory protections?

~~~
mattgreenrocks
> The memory it can expose is limited to that visible to the process using
> openssl, right

Yes. Only that process.
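For intuition on why it's process-only, here's a toy model of the bug class
(not OpenSSL's actual code; "process memory" is just a flat buffer, and the
flaw is trusting the attacker's length field):

```python
# Toy model of the Heartbleed bug class. The kernel still enforces
# process boundaries; the over-read only walks through this process's
# own heap, modeled here as one flat buffer.
PROCESS_MEMORY = bytearray(
    b"GET /login\x00user=alice&pw=hunter2\x00other session data..."
)

def heartbeat_response(payload_offset, payload, claimed_len):
    # BUG: nothing checks claimed_len against len(payload), so the
    # reply is read straight out of "process memory", past the end of
    # the attacker's own payload. (Python slicing clamps at the buffer
    # end; a real C memcpy happily keeps going.)
    return bytes(PROCESS_MEMORY[payload_offset:payload_offset + claimed_len])

# The attacker sends a tiny payload but claims a large length, and gets
# back whatever happened to sit next to it in memory:
leak = heartbeat_response(0, b"GET /login", 64)
assert b"hunter2" in leak
```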

------
dllthomas
One question - this keeps talking about attackers being able to "read all of
memory." Does anyone know whether that's limited to the process that is
running OpenSSL code?

------
mrfusion
So if I have a site that doesn't use SSL, am I still affected? I'm still
trying to make sense of all of this.

~~~
apawloski
No, if you're not using OpenSSL your site will not be vulnerable to an OpenSSL
bug.

But if your site transmits or receives anything that a third party shouldn't
see (hint: it probably does), you should start using SSL.

~~~
atonse
I would rather say...

If you're not using SSL right now, there's no rush to upgrade, but do it
anyway while this is at the forefront, because when you do start using SSL on
your server one day, you might forget that you still had this old version of
OpenSSL.

~~~
mikeash
And there may be other things besides web servers using OpenSSL that you
didn't think of or aren't aware of.

For example, I believe that using curl to fetch an https URL leaves you open
to this vulnerability if you connect to a malicious server. The odds of the
server being bad and the odds of curl containing anything of value are low,
but it still counts for something.

------
pvnick
At this point it's safer to say that an intelligence agency is responsible
than that they aren't responsible. This is precisely what Schneier, Greenwald,
et al. mean when they say that the NSA tactics degrade the security of the
overall internet architecture. It's incredibly dangerous.

~~~
rtpg
How can you make such a claim? Do you have any proof that they were involved
with this specific bug?

I get that the NSA is after us but when you consider that the bug is of the
exact same class as _a bug every C programmer has made at some point in their
career_, it seems probable that it happened by accident. Where do you see
the malicious intent?

~~~
clef
If the NSA is all-knowing, as everybody seems to think, why aren't all
criminals in jail by now?

~~~
drcube
Because they're not the police. And the jails are already full.

~~~
mikeash
Also the criminal justice system doesn't accommodate evidence obtained through
the NSA's methods.

~~~
dllthomas
Until they lie about how they got it.

[http://en.wikipedia.org/wiki/Parallel_construction](http://en.wikipedia.org/wiki/Parallel_construction)

~~~
mikeash
Even then, that's way too much effort to go through to convict a car thief or
other similarly small stuff.

~~~
dllthomas
Unless there's some other reason they want to nail that car thief, probably,
yeah.

~~~
mikeash
That's an interesting thought. Some terrorist is in the US and planning
something, but they don't want to give away their intel. So do a bit of
parallel construction and tip off the local cops to some relatively small
crime he's committed in the course of everything....

Or substitute "whistleblower" or "inconvenient politician" for "terrorist" if
you prefer.

~~~
dllthomas
Or even "ex-" or "rival".

~~~
clef
Oh wow, thanks HN, being downvoted to oblivion for asking a question. Nice
one, thanks!

~~~
dllthomas
I didn't downvote you, but I expect those who did aren't reading it as "a
question" as in "a request for information" - which I would think should
rarely be downvoted - but as "a rhetorical question" whose purpose was to
serve as a point of argument. Interpreted that way, it seems to be attacking a
strawman, poorly; such a comment would be deservedly downvoted.

------
0x006A
Why is his server still vulnerable?

~~~
dfa0
He's running on a Windows box? [IIS]

~~~
Rynant
IIS doesn't use OpenSSL though; it uses CredSSP:
[http://technet.microsoft.com/en-us/library/cc755284(WS.10).aspx](http://technet.microsoft.com/en-us/library/cc755284\(WS.10\).aspx).

~~~
dfa0
I had no idea about IIS using a different implementation.

The More You Know...

------
jl6
Interesting point for me is that those who serve over HTTP only are not
vulnerable, which I think is a good case study in the risk of complexity. A
lot of security experts have been calling for HTTPS everywhere, on the basis
that it is low cost. Clearly there is a cost to the extra complexity, and in
this case a bug in the security layer that results in a worse situation than
if there had been no security layer at all.

~~~
awj
> Interesting point for me is that those who serve over HTTP only are not
> vulnerable

...yes they are. You don't even _need_ remote private key disclosure to MITM
an http-only server. The way HTTP digest is written means you either store all
passwords in a retrievable form or drop down to basic auth where anyone
capable of base64 decoding can read all passwords.

This situation is _by no means_ worse than if everyone had just used plain
HTTP.
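The Basic-auth point is easy to see concretely: over plain HTTP the
Authorization header crosses the wire as base64, which is encoding, not
encryption, so anyone on the path decodes it with no key at all (the
credentials here are made up):

```python
import base64

# A Basic auth header as it appears in a plaintext HTTP request:
header = "Authorization: Basic dXNlcjpodW50ZXIy"

# "Cracking" it is a single decode -- no key, no brute force:
token = header.split("Basic ", 1)[1]
username, password = base64.b64decode(token).decode().split(":", 1)

assert (username, password) == ("user", "hunter2")
```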

~~~
subleq
It is worse than plain HTTP, actually. Heartbleed allows an attacker anywhere
on the internet to read out memory from your server. This is worse than plain
HTTP in two ways:

- With plain HTTP, the attacker would have to be in a MITM position to
intercept traffic. With Heartbleed, he can read traffic he wouldn't normally
have access to straight from the server's memory.

- There may be secrets in memory that would never even be sent over the
network that are now accessible. For example, if a web app runs in the same
process doing SSL termination, secrets such as Django's SECRET_KEY may be
available. Under certain circumstances, knowledge of the SECRET_KEY can
effect remote code execution.

In short, Heartbleed gives the entire world the ability to read memory from
your server. This is much worse than an HTTP MITM.

