Also, what a great name.
The remainder of the page is a loud reminder of the gap between the sec and dev communities, at least as practiced in lolstartupland. Or at least between offence and defence. The second paragraph tells you the sky is falling, and then it takes them 13 questions to tell you which OpenSSL versions are vulnerable.
(Also, I wish the behind the scenes action was less messy; why not coordinate with Debian and RedHat patches? Why did Cloudflare get advance notice?)
"The security community refers to vulnerabilities by numbers, not names. This does have some advantages, like precision and the ability to Google them and get meaningful results all of the time"
I wish everyone embedded a Dewey Decimal number into their factual pages. Would be ace.
"I saw some kvetching on Twitter to the effect that the logo designer heard about Heartbleed before the distribution maintainers at e.g. Ubuntu and RedHat did."
Updates for Debian and CentOS landed within hours. Would have been nice to have them as we read the page.
Interestingly, nothing (apparent) for Manjaro yet. Manjaro is a staged version of Arch which I have installed on a test machine to sample Gnome 3.12 when it lands in the repository in a week or so.
[keith@mocha ~]$ openssl version
OpenSSL 1.0.1f 6 Jan 2014
Perhaps. I get the impression that Manjaro (and other similar client OSes) are mainly for end users and not on servers.
CVE-2014-0160 is the official reference to this bug. CVE (Common Vulnerabilities and Exposures) is the Standard for Information Security Vulnerability Names maintained by MITRE. Due to co-incident discovery a duplicate CVE, CVE-2014-0346, which was assigned to us, should not be used, since others independently went public with the CVE-2014-0160 identifier.
Manjaro: the update arrived this morning, and may have been available for the past 12 hours as it promoted through the 'staging' cycle. Half a day to a day later than Ubuntu/Debian/RedHat if I have my times correct.
[keith@mocha ~]$ openssl version
OpenSSL 1.0.1g 7 Apr 2014
Step 1: Run this command. If it returns "You're vulnerable!" go to step 2.
And so on, with actionable steps that people could quickly understand and circulate.
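A minimal sketch of that step-1 check, assuming the vulnerable-range facts from the advisory (the exact messages here are my own invention, not from the Heartbleed page):

```shell
#!/bin/sh
# Sketch: classify the version reported by `openssl version`.
# Per the advisory, upstream 1.0.1 through 1.0.1f are vulnerable and
# 1.0.1g is fixed; 0.9.8 and 1.0.0 never had the heartbeat extension
# (the 1.0.2 betas did, so "outside 1.0.1" is not an all-clear).
# Distros also backport fixes without bumping the letter, so read
# "vulnerable" as "check your vendor's advisory", not a final verdict.
check_version() {
  case "$1" in
    1.0.1|1.0.1[a-f]) echo "You're vulnerable! Go to step 2." ;;
    1.0.1[g-z])       echo "Fixed upstream release." ;;
    *)                echo "Outside the affected 1.0.1 range; see the advisory." ;;
  esac
}

# Feed it the local build's version string, e.g. "1.0.1f":
check_version "$(openssl version | awk '{print $2}')"
```

Nothing fancy, but it is the kind of one-screen artifact people could have circulated on day one.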
It was clear from the beginning that as soon as the details became public, a race would begin for the script-kiddy-friendliest tool to own sites/users. And the most likely targets of script kiddies should be warned in advance.
AWS is at least as important, as is Akamai.
My point being, it's not enough to hand-wave about who's the biggest and most important. A good system would give anyone with enough at risk a clear path to earn a seat at the table.
Major providers could create an "early warning disclosure club", each contributing some money annually, and the money can be used to pay bounties to anyone who gives them advance warning of a zero day. Of course you'd want some safeguards to make sure nobody blackhat joins the club to use the vulnerabilities for offense.
It's hard to argue: yes, the world would be better if only the people who were going to mitigate the bug had the information until everyone had mitigated. But a repeal of the CAP theorem would be nice too. Meanwhile, we have to work with the world we have, not the one we want.
Early disclosure club isn't a terrible idea, but good luck getting it funded in a serious way. The correlation between "most impacted" and "most clueful" isn't particularly strong.
As for "most clueful": I have often lamented the current pendulum swing toward a centralized internet. But this is one area where all the centralization of infrastructure has a benefit. An awfully large fraction of the internet's data is in the hands of a small number of relatively tech-clueful organizations.
How many of Heroku's or AWS's or Akamai's customers would still be unpatched right now if those customers were managing things themselves? I'd put any of those organizations safely ahead of their median customer in the "clue" department.
I'm not going to disagree with your main point though: funding for security is always an uphill battle. When it's working great, nothing happens.
This raises the question of whether Amazon/AWS was notified prior to public disclosure.
Because if you notify the bad guys at the same time, then you've made it worse, since people you didn't notify don't know anything is up yet.
I'm not sure how you could have handled this any differently than they did, really.
...independently discovered by a team of security engineers (Riku, Antti and Matti) at Codenomicon and Neel Mehta of Google Security
But the content? A loud reminder indeed. As a member of the dev community, I would have wanted to see the following:
1. How bad is it? If you're using SSL, then an attacker may be able to read your machine's memory without leaving a trace.
2. Who is affected? Users of OpenSSL versions X-Y. Check your site here [http://filippo.io/Heartbleed/], but your client code may be affected too!
3. How do I secure myself? Update to OpenSSL version Z, reboot, and consider resetting all sensitive data on your server (reissue your SSL certificate, reset your user passwords and sessions, etc).
These facts are littered throughout a 2,000+ word document. In the future, I would like to see these things answered plainly at the very top.
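As a sketch of what step 3 might look like on a Debian/Ubuntu-style box (the package name `libssl1.0.0` and the `nginx` service are illustrative assumptions; reissuing certificates happens with your CA, not on the server):

```shell
# Sketch of step 3 on a Debian/Ubuntu-style system; adjust names to taste.
sudo apt-get update
sudo apt-get install --only-upgrade openssl libssl1.0.0
# Running services keep the old library mapped until restarted:
sudo service nginx restart
# Verify the fix: 1.0.1g upstream, or a distro build rebuilt after 7 Apr 2014.
# `openssl version -b` prints the build date ("built on: ..."), which matters
# because distro backports keep the old version letter.
openssl version -b
```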
For example, at our place, "Munin" and more recently "Graphite" have been established as the names for our monitoring systems. They describe a system spanning a couple hundred servers, including a handful of different daemons and configurations and, generally, a lot that's going on, so the term is inherently ambiguous and imprecise.
However, I've found that this takes a lot of pressure off the less involved people. They don't need to figure out what to call something precisely and correctly. They have an accepted, not entirely correct term that's precise enough to get the point across: "Munin on Server X broke" is all I need. Similarly, "Is our server X affected by Heartbleed?" might be a silly question because server X isn't a web server, but it's easy to answer, because the question is precise enough and just on the right level.
As a teacher, I give silly names to maths topics and it seems to help the students organise their 'big picture' a bit.
You have to be kidding me. It took so long to decipher what I wanted to know that I went elsewhere.
Edit: "masterful communication" this is not, since the reader doesn't know who the page is aimed at. Even a line at the top saying "Technical people go _here_", and then something aimed at technical people would be better.
For example, if I announce a new processor, I'll announce its clock speed and number of cores, and, if I feel like getting in depth, cache levels and bus speed. A technical person will still have a million questions, but my announcement isn't for them; it's for the lowest common denominator of people who care, who often have no clue about every technical aspect, only the most simplistic understanding of the topic, if any at all.
It's not for non-technical folks, either, because there's nothing they can possibly do other than be confused.
It's empty self-promoting marketing that sent the entire industry scrambling.
For updates to be deployed, the patches need to be integrated, tested, packages/updates built, and the update mechanisms tested. For complex systems -- like, say, embedded hardware -- this might involve targeting quite a few different devices and test matrices.
Even scrambling, this can take days, and leaves users blowing in the wind in the meantime.
This is why we have coordination with vendors PRIOR to public release, such that when the vulnerability is publicly disclosed, updates are available through standard update pipelines, the process is documented, and the update is known to be correct and not introduce deployment regressions.
A vulnerability of this severity needed no marketing. Grandstanding for non-technical users simply increased the likelihood that they'd be exploited while vendors rushed out fixes.
I understand that vendors find it inconvenient to field questions from users like "Are you vulnerable to Heartbleed?", most particularly when they are, in fact, vulnerable to Heartbleed. I respect that Yahoo feels embarrassed that there is a screenshot showing usernames and passwords in the clear. I think that the feelings of Yahoo users who would be discomfited that their email accounts are available to anyone with a command line deserve at least as much deference.
I also think it is a radically borked threat model which suggests that attackers only find out about vulnerabilities when the man-on-the-street does rather than when really-savvy-vendor-folk do.
Do you have a study? I remember an article here that suggested most Windows attacks were created by reverse-engineering MS patches rather than by discovering the vulnerabilities or reading about them on mailing lists; if that's the threat model then co-ordinating so that most vendors release patches at the same time is safer even if it means waiting longer for a patch.
Once you have a whiff of where the bug is, it's dramatically easier to find it. You don't need to know exactly what the bug is; you just need to reduce the problem from "read all of OpenSSL" to "read a small subset of OpenSSL". Once that narrowing of the target space happens, independent discovery is inevitable. The people most motivated to do that discovery work don't have any of your best interests at heart.
> I also think it is a radically borked threat model which suggests that attackers only find out about vulnerabilities when the man-on-the-street does rather than when really-savvy-vendor-folk do.
And yet, this is true. A small number of people with a vulnerability provides a small threat exposure, because their attacks are simply more likely to be targeted.
Everyone with a vulnerability provides a large threat exposure, because suddenly every single script kiddie on the planet has a window to target a Python script at Yahoo or GitHub or Amazon and troll through a web server's memory.
You think it was worth exposing GitHub's private company repositories to every script kiddie on earth, just because a small number of people had an incredibly valuable zero-day that they would wish to hold in reserve for high priority targets, lest it get burned and they lose the zero-day?
Are you talking about Responsible Disclosure? 'Cause I thought that existed because if security researchers only tell vendors in private, the vendors sit on it and do nothing, whereas if you tell the public first, users are vulnerable before the vendor releases a fix.
Isn't the only reason there's a public release as a threat to keep the vendors honest?
I figure there is because most of the HN community inhabits it, and I most certainly do.
That's an enormous win.
It's among the most widespread Internet bugs, but:
* An identical bug impacted nginx a few years ago
* A far worse bug impacted Debian (when they commented out the randomness in their CSPRNG), which coughed up code execution on tens of thousands of machines; lots of companies that didn't officially deploy on Debian still had a Debian box somewhere vulnerable
* The Rails YAML bug was perniciously exposed in lots of places for months after the initial disclosure, and also coughed up code execution
Losing authenticators for "live" users and TLS private keys is bad, but it's not the kind of bad where you invariably need to nuke your servers from orbit and rebuild. Other widespread bugs were actually like that.
I think that cost is outweighed by the significant increase in exposure.
I'm not saying to make it difficult for people to understand the root cause. We should strive for both. But if I had to choose one over the other I think for a bug this big that marketing it as such wins.
The long tail.
Normally, to get out of the standard sysadmin patch rut and into an expedited state, your bug needs to convincingly cough up code execution. Since this bug didn't do that, but was nonetheless very severe, it makes perfect sense to me that additional marketing was required to expedite fixes.
Compared to what? Awareness and server-side adoption of fixes for issues at this level of criticality is not a problem our industry has.
A problem we used to have, however, is people engaging in grandstanding and irresponsible disclosure, leaving users insecure and catching the industry flat-footed.
Empty scare marketing is a solution to a problem we don't have. It's also a great tool for self-advertisement, and I can only assume that's why patio11 jumped on it.
I refer you back to this recent article, which was also discussed extensively here: http://arstechnica.com/security/2014/03/ancient-linux-server...
'Our industry' is not just the parts people are proud of; it includes a multitude of craptacular bit players that nevertheless participate in the information economy.
I probably don't agree with that. I think there are plenty of systems in our industry that aren't actively maintained and don't have dedicated ops to manage them. A fix to a bug this large in magnitude needs to find its way onto those systems.
I think this is a case where the message could have been more concise.
I can't change the settings as I am not a TalkTalk customer (to my knowledge, my connection has remained functional despite mergers: Freeserve -> Wanadoo -> Orange -> EE). I certainly don't have a 10 digit customer reference and my account email is 'unknown' to the filter.
Cameron's cyber-nanny can be circumvented for eminently respectable domains such as this by judicious use of ?oo?le Cache of course.
Anyone else from the UK with default filter settings seeing this? I'm about to write to my M.P. and some wider data points would be helpful.
I have used the 'report' button: perhaps they will unblock the domain when they realise it is about Bingo.
MOODLE is an eminently useful free software course management system. A PHB I used to work for got very worried about the 'silly name' and the lack of an 0845 number for when anything went wrong. Took ages to convince him that it was a sensible alternative to another well known course management system that cost a couple of teacher salaries per year.
We got there.
Security issues of this calibre need more media sensationalism.
Giving the appropriate importance to things is part of what makes good journalism.
I watched my colleagues working around the clock (not as bad as it sounds - we are scattered around the planet for a reason) patching servers, testing and ensuring every hatch is properly shut. I can imagine other teams all over the world and all over the internet doing the same, literally saving our civilization from a threat only a tiny percentage of the population had any idea existed and an even smaller group has any idea of how it threatened us.
Marketing works both ways, you know.
Serious security vulnerabilities do their own marketing for the people that need to know about them.
This is just lowering security to the tabloid level for mass consumption by users who can't fix the issue anyway.
The only people that holding back (after the vulnerability becomes public knowledge) helps are the attackers.
There's a lot of different kinds of public. "Possibly in the wild" is very different than "available to every script kiddie under the sun".
The same arrogance that makes someone think they are the first to uncover an exploitable security bug makes it sound perfectly natural to build your own memory manager when the one provided by the OS has "bad enough" performance.
Unfortunately, sometimes you need to force people into doing things for their own benefit.
I don't look fondly on those days.
So while all the marketing has been great for Codenomicon, it caused most sysadmins and distro maintainers more headache than it should have.
> Marketing Helps Accomplish Legitimate Goals
Are you kidding me? The only goal of a security issue should be fixing it and getting everyone else to update to the fix. Heartbleed will be remembered forever because of the BS marketing.
OpenSSL isn't a startup, it's a security library that is used by over half of the internet.
Forcing the entire world to scramble is great marketing, but poor security. Vendors needed time to prep releases and communications; there's tons of confusion flying around out there.
Likewise, patio11's trying to capitalize on the awareness to market himself may also be great marketing, but it's bad advice.
I don't know why parent is being downvoted, either. This is simply not how you keep people secure. This is how you grandstand to promote yourself at the cost of other people's security.
That's what I took away from this as well.
HN loves startup porn, but how is this any different than how self-help books are marketed?
Distrust anyone who makes a business out of telling other businesses how to be successful in business because of how they're successful in business.
The people who are successful don't have time to run consultancies, and anyone that knows anything about consultancies knows that the lessons you learn there are very different than what's useful and necessary for product companies.
patio11 first made a business, then did consulting, and is now building another business.
Not that it matters. Who cares where the advice comes from if it works? And if it doesn't work, the purest motives in the world aren't going to make it work.
Given how widely deployed OpenSSL is, and how many of those systems are run by part-time or amateur sysadmins who aren't going to be monitoring CVE lists constantly, getting the word out that (1) there's a huge problem and (2) here's how you fix it is of paramount importance.
 - http://en.wikipedia.org/wiki/Thomas_Schelling#The_Strategy_o...
Marketing is useful to get sysadmins too lazy to subscribe to security announcement mailing lists to apply the already-released patches or take other mitigation.
Which, let's be honest, is the vast majority of people who admin servers these days.
With cloud servers, VPSes, etc., anyone can become a "sysadmin," and lots of people do who don't really understand what they are signing up for. These are the people running the unpatched boxes that Ars Technica recently called "the slum houses of the Internet." (http://arstechnica.com/security/2014/03/ancient-linux-server...)
Those people aren't going to patch their system just because a CVE was issued. They don't know what a CVE is. So marketing the problem is critical to reach them and get them off their duffs.
This marketing page was effective communication not just to the public, but also to the hundreds of thousands of technical people that needed to understand that this disclosure was different: they needed to take action, which in this world of plentiful managed hosting, is really not typical.
If that is the case, your FMEA needs to include undisclosed vulnerabilities in your communication channel's encryption, and the mitigation can't be telling the internet your particular opinions on responsible disclosure.
Arguing for people with privileged access to the exploit to behave the way you want when disclosing it, is a lot like arguing that people with privileged access to the exploit behave the way you want when exploiting it (ie: don't exploit). When human safety relies on an encrypted channel, you have no option but to assume people aren't going to act the way you want. If you could get people to act the way you want, you wouldn't need to use an encrypted channel in the first place.
Because they vend updates to the vast majority of users. It's about maximizing people's ability to get the fix quickly upon public disclosure.
> ... they needed to take action, which in this world of plentiful managed hosting, is really not typical.
What action do you think you need to take? Revoking certificates? Almost nothing checks OCSP or CRLs anyway; there's hardly a rush.
All this marketing has done is sent ill-informed people scurrying.
Waiting until Debian gets its shit in order is not sufficient for a number of friends of mine who work at places that take security very seriously. They disabled access as soon as it became public knowledge.
The marketing is just confusing people, and patio11's advocacy for more irresponsible marketing-focused disclosure is self-promotional, ambulance chasing, and irresponsible in the extreme.
Small ecommerce owner gets OpenSSL from their hosting vendor. Lazy sysadmin is running 'apt-get update' && 'apt-get install', and if he's not, there are 50 other serious vulnerabilities he's open to anyway.
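One thing that apt-get flow commonly misses: daemons keep the old, vulnerable libssl mapped in memory until they're restarted, and on Linux `lsof` flags those stale mappings with an FD of "DEL". A small sketch of the filter (the sample lines below are fabricated for illustration, not captured from a real host):

```shell
#!/bin/sh
# Sketch: list process names still holding a deleted (replaced) libssl.
# After an upgrade replaces the library on disk, anything whose lsof line
# matches both "DEL" and "libssl" still needs a restart.
stale_libssl() {
  grep DEL | grep libssl | awk '{print $1}' | sort -u
}

# Fabricated lsof-style lines for illustration:
printf '%s\n' \
  'nginx 1234 root DEL REG 8,1 12345 /usr/lib/libssl.so.1.0.0' \
  'sshd  5678 root mem REG 8,1 12345 /usr/lib/libssl.so.1.0.0' \
  | stale_libssl
```

In practice you'd pipe the real thing, `lsof -n | stale_libssl`; Debian's `checkrestart` (from the debian-goodies package) automates the same idea.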
Speed matters after the disclosure, when every petty criminal and script kiddy in the world is suddenly empowered.
And it lends itself to many typos, which is one of my areas of expertise along with branding. I can't easily tell someone "just go to kal zum e us dot com" like I can "heart bleed" (which, by the way, has a typo that would leak in high volume traffic to "blead" a bit).
Other than that I agree with what Patrick is saying, although I did find the use of "heartbleed" with something also referred to as "heartbeat" (which of course wouldn't be available as a domain name) a bit confusing at first.
But: are there enough two-english-word combinations left as viable .com names, much less ones that accurately describe the vulnerability?
In this specific case, I would prefer resources spent to make the OpenSSL library itself better instead of the https://www.openssl.org/ domain better.
That being said I agree with the article and love how http://heartbleed.com/ was done.
For example I might not like Facebook, but if they'd actually make such a contribution to the public good I'd always have to include that counter argument in my criticism.
Maybe someone here on Hacker News might be able to pull some strings?
> Man, would that have been an easier month if
> we had all been talking about DeserialKiller.
Serial Killer (Yeah, drops the "De", but more people will associate with it, and it's easier to parse and pronounce.)
Edit: not sure why this was downvoted, but if it contains an error please add a comment pointing it out. If you just think it should be lower on the page, no worries.
In contrast to OpenSSL, the YAML vulnerability was just a very minor blip of importance.