What Heartbleed Can Teach The OSS Community About Marketing (kalzumeus.com)
362 points by spatulon on Apr 9, 2014 | 113 comments



Yes, agreed entirely on the name, visual identity, and first three paragraphs. More like this for serious vulns, please.

Also, what a great name.

The remainder of the page is a loud reminder of the gap between the sec and dev communities, at least as practiced in lolstartupland. Or at least between offence and defence. The second paragraph tells you the sky is falling, and then it takes them 13 questions to tell you which OpenSSL versions are vulnerable.

(Also, I wish the behind the scenes action was less messy; why not coordinate with Debian and RedHat patches? Why did Cloudflare get advance notice?)


But can we still have the CVE numbers as well, please.

"The security community refers to vulnerabilities by numbers, not names. This does have some advantages, like precision and the ability to Google them and get meaningful results all of the time"

I wish everyone embedded a Dewey Decimal number into their factual pages. Would be ace.

"I saw some kvetching on Twitter to the effect that the logo designer heard about Heartbleed before the distribution maintainers at e.g. Ubuntu and RedHat did."

Updates for Debian and CentOS landed within hours. Would have been nice to have them as we read the page.

Interestingly, nothing (apparent) for Manjaro yet. Manjaro is a staged version of Arch which I have installed on a test machine to sample Gnome 3.12 when it lands in the repository in a week or so.

    [keith@mocha ~]$ openssl version
    OpenSSL 1.0.1f 6 Jan 2014
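    # (1.0.1f falls in the vulnerable 1.0.1-1.0.1f range; 1.0.1g is the fix.)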
Sort of ties in with

http://allanmcrae.com/2013/01/manjaro-linux-ignoring-securit...

perhaps. I get the impression that Manjaro (and other similar client OSes) are mainly for end users and not on servers.


The CVE number is on http://heartbleed.com/

CVE-2014-0160 is the official reference to this bug. CVE (Common Vulnerabilities and Exposures) is the Standard for Information Security Vulnerability Names maintained by MITRE. Due to co-incident discovery a duplicate CVE, CVE-2014-0346, which was assigned to us, should not be used, since others independently went public with the CVE-2014-0160 identifier.


Replying to my own comment as edit time has passed.

Manjaro: the update arrived this morning, and may have been available for the past 12 hours, so it was promoted through the 'staging' cycle. Half a day to a day later than Ubuntu/Debian/RedHat, if I have my times correct.

    [keith@mocha ~]$ openssl version
    OpenSSL 1.0.1g 7 Apr 2014


There is no doubt that masterful branding of the bug helped with patching of vulnerable systems in this case. It is not at all clear that the trend it will surely start will be a good thing. Marketing does improve visibility. But it also inherently obscures the truth. Even in this case: some people on HN don't know that it was a Google researcher who first discovered/reported the bug, and the actionable/technical information on the bug was hidden below the fold because the primary goal of the page was to be a long-term marketing tool for the security firms, not the shortest path to patching vulnerable systems. We will see how this trend develops, but I would not be surprised if we get more and more marketing with less upside (necessary visibility) and more downsides.


I seriously wished they had something like:

Step 1: Run this command. If it returns "You're vulnerable!" go to step 2.

And so on, with actionable steps that people could quickly understand and circulate.
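Something like this, perhaps (a minimal sketch, not anything the announcement actually provided; the version test is purely illustrative and inherits the backporting caveat raised elsewhere in this thread):

    #!/bin/sh
    # Sketch of "step 1": flag OpenSSL builds whose reported version falls
    # in the vulnerable 1.0.1 through 1.0.1f range. Distros often backport
    # the fix without bumping the version string, so a match means
    # "possibly vulnerable", not a verdict.
    v=$(openssl version | awk '{print $2}')
    case "$v" in
        1.0.1|1.0.1[a-f])
            echo "You're vulnerable! Go to step 2: upgrade to 1.0.1g or a patched build." ;;
        *)
            echo "Version $v is outside the known-vulnerable range." ;;
    esac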


Because Cloudflare is possibly the biggest and most vulnerable target due to the enormous number of websites and businesses relying on it. I would not be surprised if at least FB and Twitter also had early access.

It was clear from the beginning that as soon as the details became public, a race would begin for the script-kiddy-friendliest tool to own sites/users. And the most likely targets of script kiddies should be warned in advance.


> Because Cloudflare is possibly the biggest and most vulnerable target due to the enormous number of websites and businesses relying on it.

AWS is at least as important, as is Akamai.

My point being, it's not enough to hand-wave about who's the biggest and most important. A good system would give anyone with enough at risk a clear path to earn a seat at the table.

Major providers could create an "early warning disclosure club", each contributing some money annually, and the money could be used to pay bounties to anyone who gives them advance warning of a zero day. Of course you'd want some safeguards to make sure no blackhats join the club to use the vulnerabilities for offense.


Software security used to work this way, in the early 90s. Disclosures went to vendor cabals. They leaked like sieves and were a running joke on #hack.

It's hard to argue: yes, the world would be better if only the people who were going to mitigate the bug had the information until everyone had mitigated. But a repeal of the CAP theorem would be nice too. Meanwhile, we have to work with the world we have, not the one we want.

Early disclosure club isn't a terrible idea, but good luck getting it funded in a serious way. The correlation between "most impacted" and "most clueful" isn't particularly strong.


Indeed, if one posits a channel for transmitting secrets among a vendor cabal which never leaks to people not authorized to receive the secrets, we should abandon SSL and use that for our secure communication needs instead.


It still obviously does work this way, just very informally. That is, such cabals clearly exist and clearly get early access. If things seem different now, perhaps it's because the timelines are shorter or the cabals more consolidated.

As for "most clueful": I have often lamented the current pendulum swing toward a centralized internet. But this is one area where all the centralization of infrastructure has a benefit. An awfully large fraction of the internet's data is in the hands of a small number of relatively tech-clueful organizations.

How many of Heroku's or AWS's or Akamai's customers would still be unpatched right now if those customers were managing things themselves? I'd put any of those organizations safely ahead of their median customer in the "clue" department.

I'm not going to disagree with your main point though: funding for security is always an uphill battle. When it's working great, nothing happens.


Information isn't supposed to stay in the Cabal for long. If you want 6 months (or, more typically, 5 years) to fix a serious vulnerability, you shouldn't get any kind of useful protection.


Akamai was notified and completed the patch in advance of public disclosure according to their Heartbleed FAQ: https://blogs.akamai.com/2014/04/heartbleed-faq-akamai-syste...

This raises the question of whether Amazon/AWS was notified prior to public disclosure.


How do you notify a large-ish group of a secret vuln without notifying any of the bad guys?

Because if you notify the bad guys at the same time, then you've made it worse, since people you didn't notify don't know anything is up yet.


And what about Google? And Amazon? And banks? And .gov sites?

I'm not sure how you could handle this any differently than they did, really.


Google was one of the discoverers. From the page:

...independently discovered by a team of security engineers (Riku, Antti and Matti) at Codenomicon and Neel Mehta of Google Security


Would be interesting to know if the production people at Google rolled out a fix before the general announcement. E.g., is internal communication in such a large organisation still faster than the intertubes?


Google discovered the bug.


There are plenty of sites more important than FB and Twitter: sites that store credit card info, bitcoins, or generally any sensitive info. FB and Twitter are essentially public sites; if you post something there it is likely for public consumption (a limited public, but still there for people to see). If someone gets the password for your FB account it is far less serious than if they get your Amazon password or your banking password.


Heartbleed is not super descriptive to layfolk but it's certainly catchy, I'll give you that. Having a dot-com domain with said simple name was great, too.

But the content? A loud reminder indeed. As a member of the dev community, I would have wanted to see the following:

1. How bad is it? If you're using SSL, then an attacker may be able to read your machine's memory without leaving a trace.

2. Who is affected? Users of OpenSSL versions X-Y. Check your site here [http://filippo.io/Heartbleed/], but your client code may be affected too!

3. How do I secure myself? Update to OpenSSL version Z, reboot, and consider resetting all sensitive data on your server (reissue your SSL certificate, reset your user passwords and sessions, etc). (A command sketch follows below.)

These facts are littered throughout a 2,000+ word document. In the future, I would like to see these things answered plainly at the very top.
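As a hedged sketch, point 3 might have looked something like this on a Debian/Ubuntu-style system of the era (package names like libssl1.0.0, and the service handling, are assumptions; both vary by distro):

    # Upgrade the patched packages (Debian/Ubuntu names assumed):
    sudo apt-get update
    sudo apt-get install --only-upgrade openssl libssl1.0.0
    # Restart everything linked against libssl, or simply reboot:
    sudo reboot
    # Afterwards: reissue/revoke SSL certificates and reset user passwords
    # and sessions, since private keys and session data may have leaked.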


Offtopic: do you have a trademark on the word lolstartupland? ;-)


I'm noticing this at work, too. Give things, even entire contexts, short, pronounceable names.

For example, at our place, "Munin" or more recently "Graphite" have been established as the names for our monitoring systems. They describe a system spanning a couple hundred servers, including a handful of different daemons and configurations and, generally, a lot that's going on, so the term is inherently ambiguous and imprecise.

However, I've found that this takes a lot of pressure off the less involved people. They don't need to figure out what to call something precisely and correctly. They have an accepted, not entirely correct term that's precise enough to get the point across: "Munin on server X broke" is all I need. Similarly, "Is our server X affected by Heartbleed?" might be a silly question because server X is not a web server, but it's easy to answer, because the question is precise enough and pitched at just the right level.


I read that as Moomin at first.

As a teacher, I give silly names to maths topics and it seems to help the students organise their 'big picture' a bit.


"The Heartbleed announcement ... is masterful communication."

You have to be kidding me. It took so long to decipher what I wanted to know that I went elsewhere.

Edit: "masterful communication" this is not, since the reader doesn't know who the page is aimed at. Even a line at the top saying "Technical people go _here_", and then something aimed at technical people would be better.


The announcement isn't for technical folks. If you want to know the in-depth details then you likely have no issue referring to bugs as numbers, or reading up on the technical details of the exploit. When you announce something, you give the information in a form that the public can understand.

For example, if I announce a new processor, I'll announce its clock speed and number of cores; if I feel like getting in depth, cache levels and bus speed. A technical person will still have a million questions. But my announcement isn't for them, it's for the lowest common denominator of people who care: often people who have no clue about every technical aspect, only the most simplistic understanding of the topic, if any at all.


> The announcement isn't for technical folks.

It's not for non-technical folks, either, because there's nothing they can possibly do other than be confused.

It's empty self-promoting marketing that sent the entire industry scrambling.


Any result other than "the entire industry scrambling" following the public disclosure of Heartbleed would have been a failure case. How do you imagine that working out? "We need to patch millions of servers managed by hundreds of thousands of people. It all needs to happen today, and be conducted in total secrecy, because if any bad guy finds out about what we're doing there will be net-wide exploitation by botnets running trivial PoC code within 30 minutes. But no need to scramble, nah, let's do this with due deliberation."


You're demonstrating no understanding of how these things work.

For updates to be deployed, the patches need to be integrated, tested, packages/updates built, and the update mechanisms tested. For complex systems, like, say, embedded hardware, this might involve targeting quite a few different devices and test matrices.

Even scrambling, this can take days, and leaves users blowing in the wind in the meantime.

This is why we have coordination with vendors PRIOR to public release, such that when the vulnerability is publicly disclosed, updates are available through standard update pipelines, the process is documented, and the update is known to be correct and not introduce deployment regressions.

A vulnerability of this severity needed no marketing. Grandstanding for non-technical users simply increased the likelihood that they'd be exploited while vendors rushed out fixes.


I'm the guy who had to do it at my company, and in a previous career I'd be the boots-on-the-ground dealing with it at a rather larger company. I understand that vendors want a few weeks. I have lived the reality of ponderous engineering processes which need weeks to approve the smallest imaginable change. The for loop does not care what we want and does not get slower at counting to big numbers just because we are slow at counting to small numbers.

I understand that vendors find it inconvenient to field questions from users like "Are you vulnerable to Heartbleed?", most particularly when they are, in fact, vulnerable to Heartbleed. I respect that Yahoo feels embarrassed that there is a screenshot showing usernames and passwords in the clear. I think that the feelings of Yahoo users who would be discomfited that their email accounts are available to anyone with a command line deserve at least as much deference.

I also think it is a radically borked threat model which suggests that attackers only find out about vulnerabilities when the man-on-the-street does rather than when really-savvy-vendor-folk do.


> I also think it is a radically borked threat model which suggests that attackers only find out about vulnerabilities when the man-on-the-street does rather than when really-savvy-vendor-folk do.

Do you have a study? I remember an article here that suggested most Windows attacks were created by reverse-engineering MS patches rather than by discovering the vulnerabilities or reading about them on mailing lists; if that's the threat model then co-ordinating so that most vendors release patches at the same time is safer even if it means waiting longer for a patch.


Coordinated release of patches in closed-source software is possible because the people dealing with the source code are NDA'd. Rails attempted a coordinated release of the YAML bug and it was a total clusterfuck: they "soft-released" the bug with a vague notice about potential database corruption, and 1000 people simultaneously re-discovered the bug over the next two days by looking at the code. Then, everyone involved in Rails got the scope of the bug slightly wrong, and variant vulnerabilities followed for the next couple weeks.

Once you have a whiff of where the bug is, it's dramatically easier to find it. You don't need to know exactly what the bug is; you just need to reduce the problem from "read all of OpenSSL" to "read a small subset of OpenSSL". Once that narrowing of the target space happens, independent discovery is inevitable. The people most motivated to do that discovery work don't have any of your best interests at heart.


This isn't about inconvenience, it's about having patches in users' hands the moment the vulnerability hits the public.

> I also think it is a radically borked threat model which suggests that attackers only find out about vulnerabilities when the man-on-the-street does rather than when really-savvy-vendor-folk do.

And yet, this is true. A small number of people with a vulnerability provides a small threat exposure, because their attacks are simply more likely to be targeted.

Everyone with a vulnerability provides a large threat exposure, because suddenly every single script kiddie on the planet had a window to target a Python script at Yahoo or GitHub or Amazon and trawl through web servers' memory.

You think it was worth exposing GitHub's private company repositories to every script kiddie on earth, just because a small number of people had an incredibly valuable zero-day that they would wish to hold in reserve for high priority targets, lest it get burned and they lose the zero-day?


> This is why we have coordination with vendors PRIOR to public release, such that when the vulnerability is publicly disclosed, updates are available through standard update pipelines

Are you talking about Responsible Disclosure? Because I thought that existed because, if security researchers tell vendors in private only, the vendors sit on it and do nothing, whereas if you tell the public first, users are vulnerable before the vendor releases a fix.

Isn't the only reason there's a public release as a threat to keep the vendors honest?


Clearly technical vs. non-technical understanding is a binary relationship. Is there a grey zone of understanding when it comes to Transport Layer Security?

I figure there is because most of the HN community inhabits it, and I most certainly do.


But you did. What the communication accomplished was getting others, who otherwise might not have heard about it or cared enough to do something, to take measures to fix it.

That's an enormous win.


He did because this is the worst internet bug in the past 10 years, not because the page was so masterfully written. Private keys and user passwords/data being disclosed will be cared about by systems administrators even without such a fancy page.


It is not the worst Internet bug in the past 10 years.

It's among the most widespread Internet bugs, but:

* An identical bug impacted nginx a few years ago

* A far worse bug impacted Debian (when they commented out the randomness in their CSPRNG), which coughed up code execution on tens of thousands of machines; lots of companies that didn't officially deploy on Debian still had a Debian box somewhere vulnerable

* The Rails YAML bug was perniciously exposed in lots of places for months after the initial disclosure, and also coughed up code execution

Losing authenticators for "live" users and TLS private keys is bad, but it's not the kind of bad where you invariably need to nuke your servers from orbit and rebuild. Other widespread bugs were actually like that.


This bug is on 70% of systems and ANYONE can run a python script and pull out plaintext Paypal or bank passwords. It is the worst Internet bug perhaps ever.


I don't know a single vulnerability researcher who agrees with that statement. But you also didn't marshal any evidence; you restated the first thing I said about the bug, and then effectively said "no, you're wrong".


That's my point. Systems administrators will fix security bugs regardless. So there's not really any negative impact on how Heartbleed was "marketed" besides making their job a little bit harder.

I think that cost is outweighed by the significant increase in exposure.

I'm not saying to make it difficult for people to understand the root cause. We should strive for both. But if I had to choose one over the other, I think that for a bug this big, marketing it as such wins.

The long tail.


Systems administrators WILL NOT invariably fix security bugs no matter what. They'll apply patches as they are made conveniently available, and during maintenance windows. This bug demanded a faster remediation, and a more consistent one, than most bugs do.

Normally, to get out of the standard sysadmin patch rut and into an expedited state, your bug needs to convincingly cough up code execution. Since this bug didn't do that, but was nonetheless very severe, it makes perfect sense to me that additional marketing was required to expedite fixes.


> That's an enormous win.

Compared to what? Awareness and server-side adoption of fixes for issues at this level of criticality is not a problem our industry has.

A problem we used to have, however, is people engaging in grandstanding and irresponsible disclosure, leaving users insecure and catching the industry flat-footed.

Empty scare marketing is a solution to a problem we don't have. It's also a great tool for self-advertisement, and I can only assume that's why patio11 jumped on it.


> Awareness and server-side adoption of fixes for issues at this level of criticality is not a problem our industry has.

I refer you back to this recent article, which was also discussed extensively here: http://arstechnica.com/security/2014/03/ancient-linux-server...

'Our industry' is not just the parts people are proud of; it includes a multitude of craptacular bit players that nevertheless participate in the information economy.


> Awareness and server-side adoption of fixes for issues at this level of criticality is not a problem our industry has.

I probably don't agree with that. I think there are plenty of systems in our industry that aren't actively maintained and don't have dedicated ops to manage them. A fix to a bug of this magnitude needs to find its way onto those systems.


"I know this is bad... but what exactly is broken... oh."

I think this is a case where the message could have been more concise.


I agree. I saw the Heartbleed page while surfing HN, clicked through, saw something long with no TL;DR in red: THIS IS THE WORST SECURITY BUG IN MANY YEARS, AND YOUR SERVER IS VERY LIKELY AFFECTED, and ignored it for a couple of hours until I started seeing more and more posts about it.


Where? And if Heartbleed took too long to figure out, how long did it take to decipher other security vulnerabilities? Don't compare it to the landing page of a consumer service; compare it to most other OSS announcements and projects.


UK Offtopic: kalzumeus.com is being blocked under the category 'gambling' for me by the TalkTalk HomeSafe filter. First time I've seen the filter. My ADSL over copper connection is provided by EE.

https://dl.dropboxusercontent.com/u/8403291/talktalk-blockin...

I can't change the settings as I am not a TalkTalk customer (to my knowledge, my connection has remained functional despite mergers: Freeserve -> Wanadoo -> Orange -> EE). I certainly don't have a 10 digit customer reference and my account email is 'unknown' to the filter.

Cameron's cyber-nanny can be circumvented for eminently respectable domains such as this by judicious use of ?oo?le Cache of course.

Anyone else from the UK with default filter settings seeing this? I'm about to write to my M.P. and some wider data points would be helpful.

I have used the 'report' button: perhaps they will unblock the domain when they realise it is about Bingo.


Maybe MITRE should assign proper names to serious CVEs, kind of like hurricanes?


Oh, great. Then in a few years we can have minor security issues given names, too. Like how winter storms this past winter were called "Polar Vortexes." This world needs less media sensationalism, not more.


They can just use NSA-style semi-random codenames. Every CVE can be automatically assigned a pair of words out of a hat. It'll be particularly beautiful when combined with already-silly software names. I want to have to tell my boss that Raring Ringtail has been affected by Nevada Horseshoe or somesuch.
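For what it's worth, the words-out-of-a-hat scheme is nearly a one-liner on most Unix systems. A toy sketch, assuming GNU coreutils and a word list at /usr/share/dict/words:

    # Pick a pseudo-random two-word codename for a CVE:
    shuf -n 2 /usr/share/dict/words | paste -sd ' ' -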


Please don't joke about this kind of stuff.

MOODLE is an eminently useful free software course management system. A PHB I used to work for got very worried about the 'silly name' and the lack of an 0845 number for when anything went wrong. Took ages to convince him that it was a sensible alternative to another well known course management system that cost a couple of teacher salaries per year.

We got there.


"Polar vortex" is not a proper name. Complaining about that is like complaining that we have given names to things like "buffer overflow bug" or "double free".

http://en.wikipedia.org/wiki/Polar_vortex


Media sensationalism is important when it's a very serious issue.

Security issues of this calibre need more media sensationalism.


That is not sensationalism. Much the opposite.

Giving the appropriate importance to things is part of what makes good journalism.


"Polar vortex" has been in the lexicon of meteorologist for decades now.


"Your bosses / stakeholders / customers / family / etc also cannot immediately understand, on hearing the words “Rails YAML deserialization vulnerability”, that large portions of the Internet nearly died in fire."

I watched my colleagues working around the clock (not as bad as it sounds; we are scattered around the planet for a reason) patching servers, testing, and ensuring every hatch was properly shut. I can imagine other teams all over the world and all over the internet doing the same, literally saving our civilization from a threat only a tiny percentage of the population had any idea existed, and an even smaller group had any idea of how it threatened us.


You don't think for a second that the reason you were all working so hard to fix this is entirely because of the marketing? The intense marketing of Heartbleed alerted legit crackers (who would have found out anyway), and, a thousand times worse, it alerted wannabe crackers to low-hanging security-exploit fruit.

Marketing works both ways, you know.


My apologies, I accidentally downvoted this. I strongly agree: this should NOT have been publicly marketed in this way until vendors had time to assemble updates, and possibly not even then.

Serious security vulnerabilities do their own marketing for the people that need to know about them.

This is just lowering security to the tabloid level for mass consumption by users who can't fix the issue anyway.


The moment it was made public information, there was absolutely no reason whatsoever to hold back on marketing it. Even if your chosen Linux distribution isn't quite ready to go with an easy fix, by being aware of the problem you're a lot more prepared to deploy a fix the moment it becomes available.

The only people that holding back (after the vulnerability becomes public knowledge) helps are the attackers.


> The moment it was made public information, there was absolutely no reason whatsoever to hold back on marketing it.

There's a lot of different kinds of public. "Possibly in the wild" is very different than "available to every script kiddie under the sun".


Communication of the vulnerability was mind-bogglingly bad: knowledge of its existence became widespread well before major distros had patches ready. But the fact remains that our servers were vulnerable (and had been vulnerable for a very long time) and needed to be patched ASAP. We must err on the side of caution and assume that everyone who should not know of the vulnerability was already fully aware of it and capable of exploiting it.

The same arrogance that makes someone think they are the first to uncover an exploitable security bug makes it sound perfectly natural to build your own memory manager when the one provided by the OS has "bad enough" performance.


Intense marketing forces everyone to fix the problem because every bad guy has just been told about the vulnerability. But one could reasonably argue that it's better than letting the process drag on for days, weeks or months, with all the serious bad guys still knowing about the issue (don't believe for a second that secret disclosure to vendors won't leak immediately to some criminal darknets).

Unfortunately, sometimes you need to force people into doing things for their own benefit.


I remember when the antivirus companies would fight about who gets to name what. Didn't one try to name Slammer "Sapphire" after a stripper an engineer had seen the previous night?

I don't look fondly on those days.


I don't have a problem with making fanfare around the bug, but I cannot help but feel that the Linux and BSD distro maintainers should have been notified before it went public, so that the patches would have been available at the same time the site went up. Instead, Codenomicon caused them a roughly 16-24 hour delay in releasing patched versions, while doing a poor job of communicating which versions of libssl are vulnerable (1.0.1 through 1.0.1f were vulnerable, yet most distros use 1.0.1e, and they patched that version instead of upgrading to 1.0.1g, making things very confusing).

So while all the marketing has been great for Codenomicon, it caused most sysadmins and distro maintainers more headache than it should have.
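One way around that version-string confusion on dpkg-based systems is to check the package changelog for the CVE rather than trusting the version number. A sketch, assuming the distro records CVE IDs in its changelogs (Debian and Ubuntu generally do for security updates):

    # The version string is misleading on distros that backport fixes;
    # look for the CVE in the package changelog instead (Debian/Ubuntu):
    zcat /usr/share/doc/openssl/changelog.Debian.gz | grep -i cve-2014-0160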


Yes, not notifying at least the big Linux distros and BSD projects was quite irresponsible. Everyone except for a few chosen service providers like CloudFlare was thrown under a bus here.


I can't disagree with this post enough. Security exploits shouldn't be about marketing. Security exploits should be handled first and then communicated to the public after the fact. The way Heartbleed was handled led to a media firestorm. Other than Codenomicon, who else benefitted from this?

> Marketing Helps Accomplish Legitimate Goals

Are you kidding me? The only goal of a security issue should be fixing it and getting everyone else to update to the fix. Heartbleed will be remembered forever because of the BS marketing.

OpenSSL isn't a startup, it's a security library that is used by over half of the internet.


Yes, a thousand times yes. The point isn't to market a vulnerability, the point is to get a fix out there.

Forcing the entire world to scramble is great marketing, but poor security. Vendors needed time to prep releases and communications; there's tons of confusion flying around out there.

Likewise, patio11 trying to capitalize on the awareness to market himself may also be great marketing, but it's bad advice.

I don't know why parent is being downvoted, either. This is simply not how you keep people secure. This is how you grandstand to promote yourself at the cost of other people's security.


"Likewise, patio11's trying to capitalize on the awareness to market himself"

That's what I took away from this as well.


How is patio11 using this "to capitalize" and "to market himself"? Anyone who follows him knows that security-related "PSAs" are a staple of his Twitter feed. Combine that with the fact that he writes about marketing on a regular basis and this post is very much par for the course. Both his PSAs and his material on marketing and business are a real service to everyone. That's why they've been so popular on HN for years.


> Both his PSAs and his material on marketing and business are a real service to everyone. That's why they've been so popular on HN for years.

HN loves startup porn, but how is this any different than how self-help books are marketed?

Distrust anyone who makes a business out of telling other businesses how to be successful in business because of how they're successful in business.

The people who are successful don't have time to run consultancies, and anyone who knows anything about consultancies knows that the lessons you learn there are very different from what's useful and necessary for product companies.


I agree with you in a general sense, but I disagree with you in this particular case.

patio11 first made a business, then did consulting, and is now building another business.


Bingo Card Creator. Let's just keep the scale in mind here.


Bingo Card Creator and Appointment Reminder. The latter is big enough that he's started doing angel investing. Keep in mind, he runs both of them by himself and he no longer consults. I'm not saying he doesn't have an ulterior motive, but his motives are considerably more pure than those behind 99% of the articles on Hacker News.

Not that it matters. Who cares where the advice comes from if it works? And if it doesn't work, the purest motives in the world aren't going to make it work.


The path to fixing it is in part through marketing. A lot of companies need to be made aware of how dangerous this vulnerability is. Look how hard it is to get them to upgrade to the latest TLS or the most modern/secure cipher suites, and so on. If marketing can help convince them to do it a lot sooner, then godspeed.


The problem is that just "getting a fix out there" isn't enough. People have to deploy that fix for it to mean anything. And the way you get people to deploy a fix is to make them aware that they need to.

Given how widely deployed OpenSSL is, and how many of those systems are run by part-time or amateur sysadmins who aren't going to be monitoring CVE lists constantly, getting the word out that (1) there's a huge problem and (2) here's how you fix it is of paramount importance.


I believe that in this case marketing is a great way to get a fix out there. It's a commitment act, Schelling-style [0]. They basically forced everybody in the world to drop everything and fix this issue right fucking now. The seriousness of Heartbleed warrants that level of marketing, IMO.

[0] - http://en.wikipedia.org/wiki/Thomas_Schelling#The_Strategy_o...


Part of getting the fix out there is marketing. If the fix is out there and no one knows about it, or the powers that be don't care, what good is the fix?


Wait, surely getting everyone to scramble is a good way to get the fix released soon?


Marketing is entirely the wrong way to get the people who release fixes to scramble. At least at the top few tiers (package developers and distribution maintainers) you know the organizations necessary to contact, and how to contact them. If the orgs are worth their salt, a descriptive email to their security contacts is faster and easier than a marketing campaign.

Marketing is useful to get sysadmins too lazy to subscribe to security announcement mailing lists to apply the already-released patches or take other mitigation steps.


> Marketing is useful to get sysadmins too lazy to subscribe to security announcement mailing lists to apply the already-released patches

Which, let's be honest, is the vast majority of people who admin servers these days.

With cloud servers, VPSes, etc., anyone can become a "sysadmin," and lots of people do who don't really understand what they are signing up for. These are the people running the unpatched boxes that Ars Technica recently called "the slum houses of the Internet." (http://arstechnica.com/security/2014/03/ancient-linux-server...)

Those people aren't going to patch their system just because a CVE was issued. They don't know what a CVE is. So marketing the problem is critical to reach them and get them off their duffs.


No, it's a good way to get half-broken fixes rushed out the door while users are left blowing in the wind due to premature grandstanding public release.


Who is "the public" here? Why should package maintainers be hearing about this any sooner than me? I may not help maintain a popular Linux distribution, but I may very well run a service that my customers' bodily safety depends on the encryption of the SSL connection. (Hypothetical.) After disclosure, my only option is not to wait for a fix from my vendors and service providers, but to shut down my service (and lock out my customers) until a fix is available from them (or my own efforts) hours later. Otherwise, the bad actors who would benefit from seeing their SSL traffic would have hours to do so, and for some of us that can cost lives.

This marketing page was effective communication not just to the public, but to the hundreds of thousands of technical people who needed to understand that this disclosure was different and that they needed to take action, which in this world of plentiful managed hosting is really not typical.


> I may very well run a service where my customers' bodily safety depends on the encryption of the SSL connection

If that is the case, your FMEA needs to include undisclosed vulnerabilities in your communication channel's encryption, and the mitigation can't be telling the internet your particular opinions on responsible disclosure.


Whew. Good thing I don't run such a service. :-) (Edited my comment to better emphasize that it was hypothetical.)


I knew it was hypothetical, I was just building off my experience with this as a reality and not a thought experiment. I think that your hypothetical is useless to this conversation. The seriousness of death is a good argument tool, but using it in the context of responsible disclosure is theatrics.

Arguing for people with privileged access to the exploit to behave the way you want when disclosing it is a lot like arguing that people with privileged access to the exploit will behave the way you want when exploiting it (i.e., not exploit). When human safety relies on an encrypted channel, you have no option but to assume people aren't going to act the way you want. If you could get people to act the way you want, you wouldn't need an encrypted channel in the first place.


Yup. It's a great point. I do frequently mention the safety aspect in conversations about secure channels because I know that was how the importance of the work was pitched to me when I worked with a VPN provider in the past. (As a developer, but not in a role where I would have anything to do with the FMEA you mentioned. I had to look that acronym up.) I think it's a good point for people to keep in mind.


> Why should package maintainers be hearing about this any sooner than me?

Because they vend updates to the vast majority of users. It's about maximizing people's ability to get the fix quickly upon public disclosure.

> ... they needed to take action, which in this world of plentiful managed hosting, is really not typical.

What action do you think you need to take? Revoking certificates? Almost nothing checks OCSP or CRLs anyway, there's hardly a rush.

All this marketing has done is sent ill-informed people scurrying.


> What action do you think you need to take?

Waiting until Debian gets its shit in order is not sufficient for a number of friends of mine who work at places that take security very seriously. They disabled access as soon as it became public knowledge.


No, a thousand times no. It's pretty obvious big targets would be on top of this. But given the severity of this bug you need to get to the lazy sysadmin, to the small ecommerce owner that doesn't have an on site admin, etc.


Small e-commerce sites and lazy sysadmins are probably running such old, outdated versions of OpenSSL that they aren't vulnerable to this bug anyway.


Of all the problems we have, server-side adoption in the face of serious security flaws is not one of them.

The marketing is just confusing people, and patio11's advocacy for more irresponsible marketing-focused disclosure is self-promotional, ambulance chasing, and irresponsible in the extreme.

Small ecommerce owner gets OpenSSL from their hosting vendor. Lazy sysadmin is running 'apt-get update && apt-get upgrade', and if he's not, there are 50 other serious vulnerabilities he's open to anyway.


I just worry that next time a major incident occurs, the author will spend more time working on the design than on announcing the issue.


At that point, speed isn't really the issue yet. Heartbleed was in the wild for two years. Would a day or two have made much difference? Highly unlikely.

Speed matters after the disclosure, when every petty criminal and script kiddy in the world is suddenly empowered.


I agree with you, but what about the people that knew about this beforehand? The article references CloudFlare; their blog says that they knew about this before the rest of us. Who is to say those individuals are not bad guys?


The first thing I thought about this whole thing when I saw the name was "this is a great name for this bug, and will help ensure everyone hears about it - and panics, which is the goal". I think the logo helped amplify that, so great work by the people who thought this up.


Also, hats off to the heartbleed.com keepers, Codenomicon, for handling this very selflessly - despite this (fuzzing) being their core business and having found the bug itself. They could have made it a "company logo first" marketing campaign.


Maybe they could start naming them like they name hurricanes in addition to the CVE number.


Excellent writeup, but as long as the subject is marketing and memorability in names (and in particular domain names): kalzeumus (or is it kalzumeus?) isn't the easiest name to remember for a blog or business.

And it lends itself to many typos, which is one of my areas of expertise along with branding. I can't easily tell someone "just go to kal zum e us dot com" like I can "heart bleed" (which, by the way, has a likely typo that would, at high traffic volumes, leak a bit of traffic to "blead").

Other than that I agree with what Patrick is saying, although I did find the use of "heartbleed" with something also referred to as "heartbeat" (which of course wouldn't be available as a domain name) a bit confusing at first.


I agree with the principle; the logo even made the NYT, which had at least three stories on Heartbleed.

But: are there enough two-English-word combinations left as viable .com names, much less ones that accurately describe the vulnerability?


A .bug TLD may actually work here.


Why not, we already have .coffee .florist and .dating - we should just enumerate the OED for TLDs.


Don't overdo it either. There are plenty of landing pages for non-existent services; no need for crazy project pages where the projects themselves will soon die from lack of interest or are just subpar.

In this specific case, I would prefer resources spent to make the OpenSSL library itself better instead of the https://www.openssl.org/ domain better.

That being said I agree with the article and love how http://heartbleed.com/ was done.


Talking about marketing: wouldn't this be a great time for one of the not-so-small IT companies to pull off a publicity stunt within the tech community and donate a few full-time developers to improve the OpenSSL codebase?

For example I might not like Facebook, but if they'd actually make such a contribution to the public good I'd always have to include that counter argument in my criticism.

Maybe someone here on Hacker News might be able to pull some strings?


    > Man, would that have been an easier month if
    > we had all been talking about DeserialKiller.
Cereal Thief (I like a bit of whimsy; and as a child, it was serious :-)

Serial Killer (Yeah, drops the "De", but more people will associate with it, and it's easier to parse and pronounce.)


The one weak point of the landing page is that it didn't indicate who was not affected. I read to the bottom of the announcement and had to think a while on whether I had to update my laptop because, hey, this seems like a serious bug. Granted, I'm nontechnical... but that's kind of the point.

Edit: not sure why this was downvoted, but if it contains an error please add a comment pointing it out. If you just think it should be lower on the page, no worries.


Bugs should be named after shitty politicians. Especially those which oppose or act against net neutrality.


Apple's GOTO FAIL certainly also had a catchy name.


I'm not sure how big a part the name and branding, per se, played in the wide reaction to this vulnerability. I would argue that people reacted because they knew it was incredibly serious, impacting almost every site out there. Further, a lot of the reaction was by security and infrastructure people and organizations who were themselves impacted and vulnerable, despite every best practice.

In contrast to the OpenSSL bug, the YAML vulnerability was just a very minor blip in importance.


Ironic that the blog talking about this is a rather boring-looking site that I navigated away from as soon as I got the gist. Not meaning to be harsh, but that's what I did...


Mission accomplished. does a little jig


Not really sure how it's possible to hang out on HN and not know who patio11/Patrick/Kalzumeus is...


I've seen people not know who "pg" is, so...



