Hacker News

Nice move!

I was already wondering, though: might Mehta and/or Codenomicon have been put on the scent of the Heartbleed bug by an inquiry from one of the journalists with Snowden docs?

I hope we hear more about how each of the researchers found the bug – code auditing? fuzzing? observing attempted exploits? etc – and in the same general timeframe.




No. Neel found it by auditing code. See CVE-2011-0014 for a previous OpenSSL bug he found by auditing code. See CVE-2010-0239 to witness his awesome ability to find bugs by auditing assembly code.


Thanks! Do you know this because you work with him and are relaying his report of that approach in this particular case, or do you just expect that's the case from his usual technique and prior cases?

Wouldn't that track record of awesome ability, and his support for the Freedom of the Press Foundation, also potentially make him a sought-after expert about mysterious things that a journalist didn't understand in a leaked NSA (or other) document?

Do you know if the Codenomicon researcher(s) found the issue in the same way? (Their corporate webpage makes a big deal about their fuzzing tools.)

Do you believe it was just coincidence that both recently found this same monumental vulnerability? How long would Mehta and/or Codenomicon typically privately research such a bug, with or without coordination with OpenSSL maintainers, after first confirmation of danger?


You're welcome! I know this because I work on the same team with him. I'm not familiar with what Codenomicon did and won't publicly speculate.


Not to diminish Neel's work (nobody will deny that he has been at the top of his game for the last decade or more), but this particular bug was blatantly obvious to anyone who audits C code. It just tells us that nobody bothered to look at that code for two years. If a client had given me this code to audit, I'd have found it immediately. If you were hiring for an entry-level auditing position, or an intern, and a candidate did not spot this, you would not want to hire them.

For me personally, one of the biggest reasons I wouldn't have looked is the perception that a bug this dumb would surely have been caught already, so looking feels like a waste of time that could instead be billed hourly to paying clients. Moments like these really challenge that assumption.
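For readers who don't audit C: the bug belongs to a classic pattern, trusting an attacker-supplied length field before copying memory. Below is a minimal sketch of that pattern (a simplified illustration, not the actual OpenSSL heartbeat code; the function name and layout are invented for clarity):

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Sketch of the bug class (NOT the real OpenSSL code): the record
 * starts with a 16-bit length field the peer controls, followed by
 * the payload. The buggy handler echoes back `claimed_len` bytes
 * without checking that many bytes actually arrived. */
unsigned char *echo_payload(const unsigned char *record, size_t record_len)
{
    if (record_len < 2)
        return NULL;

    /* Attacker-controlled length field at the start of the record. */
    uint16_t claimed_len = (uint16_t)((record[0] << 8) | record[1]);

    /* BUG: the missing check. Correct code must also verify
     *     claimed_len <= record_len - 2
     * before copying. Without it, memcpy reads past the received
     * buffer and leaks adjacent heap memory back to the peer. */
    unsigned char *reply = malloc(claimed_len);
    if (reply == NULL)
        return NULL;
    memcpy(reply, record + 2, claimed_len); /* over-read if claimed_len lies */
    return reply;
}
```

An auditor scanning for attacker-controlled values flowing into `malloc`/`memcpy` spots this shape quickly, which is the sense in which the bug was "obvious" once anyone actually looked.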


I'm trying to understand how two (independent?) researchers happened to find such a 'blatantly obvious' bug in the same time window. Either explanation – both noticing exploitation in the wild, or both being prompted to take a fresh look by an outside inquiry – seems possible.

But I suppose the most likely possibility is that the Apple and GnuTLS bugs earlier this year made many groups begin a deeper look at trusted-but-not-really-well-reviewed code. OpenSSL fits that category, and with many (?) teams now combing it, discovery collisions during the window of time between "1st realization" and "released fix" are more likely.

Still, suppose a researcher takes a fresh, deep look at OpenSSL and finds something of this magnitude. Would they race to inform OpenSSL and deploy a fix ASAP, or, suspecting there's much more low-hanging fruit, keep reviewing before disclosure and batch the findings, so the eventual shock/pain can be handled in one internet-wide emergency rather than many?

Which is a roundabout way of asking: are we about to get a bunch of these trickled out as Mehta, Codenomicon, & others continue to work through the code, or did we just get this because that review process completed?


Well, we know now that Chrome is switching to OpenSSL, so that's likely what prompted Mehta's discovery. I have no idea who Codenomicon are or why they were looking though.



