OpenSSL 3.0.7 fixes X.509 email address buffer overflows (openssl.org)
586 points by petecooper on Nov 1, 2022 | 226 comments



The web pages and git aren't updated yet; here is the vulnerability description straight from the .tar.gz on their FTP:

Fixed two buffer overflows in punycode decoding functions. A buffer overrun can be triggered in X.509 certificate verification, specifically in name constraint checking. Note that this occurs after certificate chain signature verification and requires either a CA to have signed the malicious certificate or for the application to continue certificate verification despite failure to construct a path to a trusted issuer.

In a TLS client, this can be triggered by connecting to a malicious server. In a TLS server, this can be triggered if the server requests client authentication and a malicious client connects.

An attacker can craft a malicious email address to overflow an arbitrary number of bytes containing the . character (decimal 46) on the stack. This buffer overflow could result in a crash (causing a denial of service). ([CVE-2022-3786])

An attacker can craft a malicious email address to overflow four attacker-controlled bytes on the stack. This buffer overflow could result in a crash (causing a denial of service) or potentially remote code execution depending on stack layout for any given platform/compiler. ([CVE-2022-3602])

-----------------------------------

Doesn't sound that critical to me. CAs normally don't let you outright construct your own certificate, and I'd expect you'll have a hard time getting a certificate issued that is both for mail encryption (so you get an email name constraint) and for TLS (SAN constraint). And servers without TLS client authentication, which is about 99.99% of them, aren't affected. TLS client auth is usually only used in enterprise networks and is typically terminated by middleboxes running ancient software anyway.


>Doesn't sound that critical to me.

https://www.openssl.org/blog/blog/2022/11/01/email-address-o... has

Q: The 3.0.7 release was announced as fixing a CRITICAL vulnerability, but CVE-2022-3786 and CVE-2022-3602 are both HIGH. What happened to the CRITICAL vulnerability?

A: CVE-2022-3602 was originally assessed by the OpenSSL project as CRITICAL as it is an arbitrary 4-byte stack buffer overflow, and such vulnerabilities may lead to remote code execution (RCE).

During the week of prenotification, several organisations performed testing and gave us feedback on the issue, looking at the technical details of the overflow and stack layout on common architectures and platforms.

Firstly, we had reports that on certain Linux distributions the stack layout was such that the 4 bytes overwrote an adjacent buffer that was yet to be used and therefore there was no crash or ability to cause remote code execution.

Secondly, many modern platforms implement stack overflow protections which would mitigate against the risk of remote code execution and usually lead to a crash instead.


That sounds bananas. It's okay to have a little stack overflow as a treat?

So for some people, this still is a critical vuln. Is there a list of people for whom this is still urgent?


> It's okay to have a little stack overflow as a treat?

They're still doing a CVE and publishing a fix, they're just calling it HIGH and not CRITICAL (definition at https://www.openssl.org/policies/general/security-policy.htm...). More on their categorization, from when they decided to break CRITICAL out of HIGH: https://www.openssl.org/blog/blog/2015/09/28/critical-securi...


I dunno, your criteria sound too broad. I think there's absolutely value to the user in having vulnerability disclosures distinguish the case of "exploit exists" from "no exploit exists". It's all well and good to just advise everyone to always update their systems, but in practice people managing systems need to deal with gray areas.

Do you shut systems down proactively? Revoke certs for hosts running the old version? Change network access protocols to require the new version? All those decisions have measurable costs, and in practice "we believe this is unexploitable on the OS version you are running" is going to change the decision for some users.


If your stack overflow only allows user input to overwrite other unvalidated user input, then yes, it's "ok". Sometimes you get lucky. Of course it's a bug that should still be fixed, but if it's not an exploitable vulnerability then it's not an exploitable vulnerability. (It's not exploitable for ad-hoc rather than systematic reasons, but anyone who's still using C in this day and age is already only avoiding exploitable vulnerabilities through ad-hoc rather than systematic reasons, so that doesn't seem like a reason to rush out a fix).


I don't understand, were you planning to patch the CRITICAL vulnerability but now that it's only HIGH you have dropped the patch? It's nice to have a scale for these things but it also doesn't fucking matter.


> were you planning to patch the CRITICAL vulnerability but now that it's only HIGH you have dropped the patch?

i mean it’s not far from the truth. do i wake up early on release day and rush out a manual patch, or do i do nothing and observe it to be fixed 1-2 days from now when my OS vendor ships an updated openssl in the course of business-as-usual?

what kind of environment are you in where you’re keeping up with CVEs but don’t care about their categorization?


Their initial assessment was CRITICAL at the point of announcement. Turned out to be only HIGH. Aren't we lucky or what?

Would I have preferred to not deal with it on this holiday? Yes. But I prefer to be safe rather than sorry, and the OpenSSL team is doing their best to help us be safe.

When was the last time you had to deal with such an issue, where a HIGH vuln was announced as a CRITICAL one?


Why would I patch this in an environment where I only trust certificates that I control and distribute? And in environments where TLS session handling is done in a process with stack cookies (assuming I verify there's a cookie on this function)?

The reason to have these descriptions is to assess whether this threat applies to you. If I were in another position I would care a lot, but I'm not.

I'll likely patch anyways because this bug may end up being a useful primitive in a larger chain of bugs (kinda doubt it tho tbh), and for me patching is not hard, but at scale, given this threat model, I would not necessarily trigger an out of cycle patch.


Patching things isn't free. If it's only HIGH and not CRITICAL, then users really are going to spend their time on other things and leave things unpatched.


It goes both ways. If engineers are so busy they can't patch HIGH vulnerabilities, and a lot of CRITICAL vulnerabilities come in that don't impact them, they might decide they're too busy to patch CRITICAL vulnerabilities too.


There are enough places where I would hold off on patching critical vulnerabilities as well. Practically all vulnerabilities require some specific conditions to be in place to be exploitable. It's up to the engineers to determine whether they are affected. Heartbleed, for example, was ranked as critical, but if your public SSL sessions terminated at a server running some other TLS stack, you could hold off on patching and roll it into the general patch cycle. Why should you invest time into an out-of-cycle patch for no gain other than the fuzzy feeling of having patched?


Well... I am not in favor of fine-grained severities, personally, but that's what I'm asking. Where in the patch response decision tree do I plug in HIGH vs CRIT? For people who do play this game, now that this vuln is apparently either CRIT or HIGH depending on whom you ask, how do I know which value to plug into my response decider?


Do you actually want this spelled out for you?

* CRIT - Wake an engineer up at 3 AM to apply the patch.

* HIGH - Apply the patch in the next 24 hours.

* MEDIUM - Apply the patch in the next week.

* LOW - Apply the patch in the next month.

I'm making these time frames up, but I'm doing it to illustrate a point: different vulnerabilities have different priority levels. If you consider EVERY VULNERABILITY IN EVERY PACKAGE TO BE AN EMERGENCY then your on-call engineers aren't going to get a lot of sleep, and they'll probably find work somewhere less alarmist. Library maintainers like OpenSSL provide these severity levels to assist you in that prioritization game. If you don't trust them, which might be warranted in some systems, that's fine; you can always read every single CVE for every package yourself (or pay someone to do so).


More realistically:

* CRIT - The CEO called my boss and asked if we were patched yet, so I will be working 16 hour days until it's patched, even if the vuln can't be exploited

* HIGH - My boss told his bosses we have a deadline of other work to meet, so I have until the end of next week

* MEDIUM - Put it in the backlog

* LOW - We will upgrade or sunset the product before it gets patched


Nice that you have a CEO who has ever heard of OpenSSL, much less monitors patch advisory notices.


They don't, but sometimes a big vuln makes it into The New York Times or something and they get panicky


When the json thing came out I think it was late night Thursday on HN. I’d checked and patched our applicable servers on Friday and carried on

The panic set in over the weekend as managers decided to issue major panic stations, bronze and silver bridge calls, etc - after it had reached the more general press.

Told them to f-off, public facing servers had been automatically patched, most of the rest manually done and triaged.


My experience playing strategy games (like Go) is that it's not always about plugging in numbers like that, at least for human decision makers. Mainly:

- There are limited resources (time and money)

- Which of the other seemingly urgent things also need to be done?

If you don't differentiate between finer-grained severities, then you won't be able to triage between existential threats and things that will hurt but be survivable. And, probably more controversially for people who don't know Go, sometimes you accept losses in order to capture greater gains elsewhere.

No action will ever be risk-free, so this is about managing or mitigating risk rather than eliminating it. There are risks that are statutory and therefore require compliance in order to stay legal. And then there are risks that are not legal matters and depend on an organization's appetite for risk. Risk is always present in some form (we don't have perfect information for every decision we want to make, let alone know all of the available choices), so these calls are up to each individual organization and person to make. Being able to differentiate between severities allows an organization to weigh risk against cost, time, and opportunity.

And if we want to eliminate this whole class of problems (like stack overflows), we could also look at using something like Rust instead.

And that's also not getting into nation-state actors sabotaging standards so that vulnerabilities in OpenSSL keep popping up.

In this particular case, the response my team is doing is inventorying our existing systems to find anything using OpenSSL 3.0.x, and therefore vulnerable. So far, all the systems we have found are using OpenSSL 1.1.1 ... as is probably the case for most organizations.


They are both getting patched. They only downgraded the CRITICAL designation to a HIGH after further testing etc. They are still fixing it.


Right!? Just because several people have been unable to exploit this stack overflow in a week doesn't prove the flaw is not exploitable.


If you remember when Heartbleed was first announced, Cloudflare put up a vulnerable server and challenged the internet, saying they did not think it was possible to exploit it to exfiltrate data. They were quickly proven wrong.


I think that's significantly different, because Heartbleed gave attackers potentially sensitive information as opposed to just crashing.


“Just crashing” is a premise; it hasn't been proven. These are buffer and stack overflows, so I feel some skepticism is warranted.


Sure, but a vulnerability that out of the box gives attackers potentially sensitive data is a much easier target.


Thank you. I'd forgotten that, and it's very relevant.


You forgot the /s sarcasm indicator.


I'm being perfectly sincere.


The NixOS update has some details: https://github.com/NixOS/nixpkgs/pull/198999

### Changes between 3.0.6 and 3.0.7 [1 Nov 2022]

* Fixed two buffer overflows in punycode decoding functions.

   A buffer overrun can be triggered in X.509 certificate verification,
   specifically in name constraint checking. Note that this occurs after
   certificate chain signature verification and requires either a CA to
   have signed the malicious certificate or for the application to continue
   certificate verification despite failure to construct a path to a trusted
   issuer.

   In a TLS client, this can be triggered by connecting to a malicious
   server.  In a TLS server, this can be triggered if the server requests
   client authentication and a malicious client connects.

   An attacker can craft a malicious email address to overflow
   an arbitrary number of bytes containing the `.`  character (decimal 46)
   on the stack.  This buffer overflow could result in a crash (causing a
   denial of service).
   ([CVE-2022-3786])

   An attacker can craft a malicious email address to overflow four
   attacker-controlled bytes on the stack.  This buffer overflow could
   result in a crash (causing a denial of service) or potentially remote code
   execution depending on stack layout for any given platform/compiler.
   ([CVE-2022-3602])

   *Paul Dale*


This does not seem to contain any extra information at all?


For context, when the parent comment was posted, the link to the announcements mailing list archive (which was the original link for this article, it seems to have been changed to the blog post since then) was timing out. It's true that the parent comment contains no extra information at all, but that's only if you managed to open the mailing list link.


it has more \r\n


>Doesn't sound that critical to me.

I agree; and it looks like the developers have had a change of heart as this is apparently only being categorized as "high" rather than "critical" severity now.



Malicious email address in what, I wonder.


Certificates, which makes this pretty nasty, because it implies that there's a potential RCE if you can trigger any sort of certificate parsing remotely. (Would sending a TLS client certificate when initiating an HTTPS request do this out of the box?)


Only if the server side accepted a client certificate during the handshake, and then either that certificate had a trust path to a root CA trusted by the server OR the server was not performing trust path validation. I think it's pretty nichey.


I was not able to find out whether OpenSSH can be exploited via this CVE by presenting a malicious client auth certificate.

I would be glad if someone could clarify whether OpenSSH has this vulnerability.


OpenSSH doesn't support X.509 certificates.


Nor does it call any SSL-related function.

OpenSSH only links to OpenSSL for the cryptographic primitives. SSH and SSL are different protocols.


sounds like the X.509 certificate?


For anyone else with "mature" server configurations who is on 1.1.x, confused by the OpenSSL version numbers, and wondering how the patched 3.0.x version translates to something that applies to you, it seems you have nothing to worry about:

> This code was first introduced in OpenSSL 3.0.0. OpenSSL 1.0.2, 1.1.1 and other earlier versions are not affected.

> We did release an update to OpenSSL 1.1.1, namely 1.1.1s, also on 1st November 2022, but this is a bug fix release only and does not include any security fixes.
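If you're not sure which OpenSSL your own binaries actually load at runtime (as opposed to what the distro package name suggests), a tiny check program works. This is just a sketch; it only uses the long-standing `OpenSSL_version()`/`OpenSSL_version_num()` calls, and the affected range in the comment comes from the advisory quoted above.

    #include <stdio.h>
    #include <openssl/crypto.h>   /* OpenSSL_version(), OpenSSL_version_num() */

    int main(void)
    {
        /* Reports the library actually loaded at runtime, which may differ
           from the headers you compiled against. Per the advisory, only
           3.0.0 through 3.0.6 are affected; 1.0.2 and 1.1.1 are not
           (modulo distro backports, which keep the old version string). */
        printf("linked library : %s\n", OpenSSL_version(OPENSSL_VERSION));
        printf("version number : 0x%lx\n", OpenSSL_version_num());
        return 0;
    }

Compile it with something like `cc check.c -lcrypto` and run it in each environment you care about.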


If anyone's wondering why OpenSSL skipped 2.0.0, see https://www.openssl.org/blog/blog/2018/11/28/version/


I wish I'd continued using a scripted build of OpenSSL 1.1.x on all platforms instead of switching to vcpkg OpenSSL 3.0.x on newer platforms. There's been a lot of drama for no particular gain in our use case.


From their blog [1] the vulnerability "was reported in private to OpenSSL on 17th October 2022 by Polar Bear who was performing an audit of OpenSSL code". OpenSSL 3.0 was released in September 2021.

Shouldn't fuzzing have caught this at some point? I was under the impression that OpenSSL was being fuzz tested constantly since Heartbleed.

[1] https://www.openssl.org/blog/blog/2022/11/01/email-address-o...


Fuzzing is searching a vast input space with various rough heuristics that try to weight more error-prone paths when generating test inputs. It's incomplete by nature.


It sounds like the fuzzer would have had to use an arbitrary number of bytes containing '.' and the cert would've had to pass chain of trust verification. A fuzzer is only as strong as its implementation.


I hope they're not just fuzzing random bytes; in theory that would take forever to trigger any code path other than just rejecting the certificate.


IIRC, fuzzers add randomness, measure occurrences of newly entered code paths, and build out from previous inputs to reach more code paths. However, as you noted, in this case the cryptographic verification code would in practice always be hit before any sane progress could be made in fuzzing any deeper.

To catch this they would've needed a specific fuzzing setup where the harness prepared random certificates that are THEN signed and passed on, and made the fuzzer short-circuit anything where the random certificate failed signing, in order to actually test verification.
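To make that concrete, here's a rough libFuzzer-style sketch of such a targeted harness. The decoder name and signature below are assumptions about OpenSSL's internal punycode code (it's not public API), so treat this as an illustration of the approach rather than a drop-in harness; it would have to be built against OpenSSL's internal objects, much like the targets in OpenSSL's own fuzz/ directory.

    /* Sketch of a targeted libFuzzer harness; illustration only.
       Assumed internal decoder shape (the real signature may differ):
         int ossl_punycode_decode(const char *enc, size_t enc_len,
                                  unsigned int *out, unsigned int *out_len); */
    #include <stddef.h>
    #include <stdint.h>

    #define MAX_LABEL 64   /* hypothetical output capacity, mirroring a fixed buffer */

    int ossl_punycode_decode(const char *enc, size_t enc_len,
                             unsigned int *out, unsigned int *out_len);

    int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
    {
        unsigned int out[MAX_LABEL];
        unsigned int out_len = MAX_LABEL;

        /* Feed raw fuzz input straight to the decoder, skipping certificate
           parsing and signature checks entirely, so the fuzzer spends its
           budget on the code that actually overflowed; ASan or a stack
           canary then catches any out-of-bounds write. */
        ossl_punycode_decode((const char *)data, size, out, &out_len);
        return 0;
    }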


Shouldn't fuzzing have caught this at some point?

Eventually, yes. Unfortunately, fuzzing is usually nondeterministic/pseudorandom so it won't necessarily go down the path that leads to the bug soon enough.


This seems more difficult to find with fuzz tests than Heartbleed. Unit tests of all edge values would have found this. I think the solution is to use better tools and methodology.

The size of the code base is also just daunting and a security risk in itself.


I noticed that LibreSSL posted a patch release (3.6.1). Does anyone know if it addresses similar issues?

    We have released LibreSSL 3.6.1, which will be arriving in the
    LibreSSL directory of your local OpenBSD mirror soon.

    It includes the following fixes:

     - Custom verification callbacks could cause the X.509 verifier to
       fail to store errors resulting from leaf certificate verification.
         Reported by Ilya Shipitsin.
     - Unbreak ASN.1 indefinite length encoding.
         Reported by Niklas Hallqvist.
     - Fix endian detection on macOS
         Reported by jiegec on Github


LibreSSL is not affected by this. It was forked before this vulnerability was introduced into OpenSSL, i.e. it's a new bug.


Libre should delete the code that supports X.509 email addresses.


No, these fixes look unrelated.


OpenSSL blog post: https://www.openssl.org/blog/blog/2022/11/01/email-address-o...

Q: Are all applications using OpenSSL 3.0 vulnerable by default?

A: Any OpenSSL 3.0 application that verifies X.509 certificates received from untrusted sources should be considered vulnerable. This includes TLS clients, and TLS servers that are configured to use TLS client authentication.

Q: Are there any mitigations until I can upgrade?

A: Users operating TLS servers may consider disabling TLS client authentication, if it is being used, until fixes are applied.
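In OpenSSL API terms, "disabling TLS client authentication" means not requesting a certificate from the client during the handshake, i.e. the verify mode on the server's SSL_CTX. A minimal sketch follows (the helper name is just for illustration; context creation, cert/key loading and error handling are elided):

    #include <openssl/ssl.h>

    /* Sketch only: ctx setup, certificate/key loading and error handling elided. */
    void configure_client_auth(SSL_CTX *ctx, int require_client_certs)
    {
        if (require_client_certs) {
            /* The server sends a CertificateRequest and verifies whatever the
               client presents; this is the code path the advisory warns about. */
            SSL_CTX_set_verify(ctx,
                               SSL_VERIFY_PEER | SSL_VERIFY_FAIL_IF_NO_PEER_CERT,
                               NULL);
        } else {
            /* No certificate is requested from clients, so attacker-supplied
               client certificates never reach the name-constraint checking. */
            SSL_CTX_set_verify(ctx, SSL_VERIFY_NONE, NULL);
        }
    }

If mTLS is how you authenticate clients, though, you obviously can't flip this switch, which is the point made in the next comment.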


Q: Are there any mitigations until I can upgrade?

A: Users operating TLS servers may consider disabling TLS client authentication, if it is being used, until fixes are applied.

Paraphrasing... A: if you depend on mutual TLS; no.


Well, it also seems to require that you completely ignore the trust chain? Am I reading that right?

> Note that this occurs after certificate chain signature verification and requires either a CA to have signed the malicious certificate or for the application to continue certificate verification despite failure to construct a path to a trusted issuer.


The point is the vulnerable code will execute any time a trusted certificate is being validated. If you have a service that depends on mTLS, then your service is (by definition) validating client certificates so you can't mitigate your exposure.

There is a separable question of whether your service trusts certificates issued by a CA that might produce a certificate with a SAN of the necessary form to trigger the exploit.


If you rely on mTLS you highly likely run your own CA. You can probably say with confidence that no malicious certificates have been signed by your CA. So you mitigate your exposure by ignoring certs without a valid chain of trust, which you're doing anyway.

Am I missing something?



> An off by one error in the punycode decoder allowed for a single unsigned int overwrite of a buffer which could cause a crash and possible code execution.

https://github.com/openssl/openssl/commit/3b421ebc64c7b52f1b...

A one character change.


Colm MacCárthaigh has a nice writeup on CVE-2022-3602 including steps to reproduce: https://github.com/colmmacc/CVE-2022-3602


No RCE on Ubuntu:

    Ubuntu packages are built with stack protector, reducing the
    impact of this CVE from remote code execution to a denial of
    service.
-- https://ubuntu.com/security/CVE-2022-3602


That is a very clever feature. Cheap insurance.


Anywhere I can learn more about what stack protection they have?


https://wiki.ubuntu.com/Security/Features - particularly the stack protector, heap protector, ASLR, PIE, stack clash protection, NX bit, kernel stack protector, kernel ASLR.



Very interesting, thank you


Ubuntu 22.04 ships with OpenSSL 3.0.2. Now that it's November 1, I was expecting to see `openssl` in a `sudo apt upgrade` but it is not there.

Does anyone know when Ubuntu might ship the fix?



After an `apt update` on jammy, `apt changelog openssl` still shows 3.0.2-0ubuntu1.6 for me as the latest from July 4. Do you know if there is something that is holding it back from trickling down?

Edit (28 minutes later): I got the update now. Many thanks to Marc Deslauriers for his work!


Also be aware sometimes fixes are backported, so the version isn't 100% accurate, ymmv etc.


Yes, it gets really confusing because they keep the major version they shipped with but usually add a letter or suffix to indicate the fix was backported to the older version, without bumping to a newer version number, to prevent dependency issues. This sparked a heated debate on a project I was part of when a community member misunderstood the mechanism.


I _still_ have to go through this every time a clipboard warrior thinks they need to do a "security audit" in order to check a box for some meaningless certification or another.

But of course THEY don't want to run the audit (sounds too much like work!) so they contract the audit out to a third-party that builds a database saying which versions of various software are vulnerable according to these CVEs. Of those, these contractors don't actually understand how Linux distributions are put together and their scanners _always_ flag fully patched and up-to-date machines as being vulnerable to X, Y, and Z.

And then I have to explain that their scanners are broken by virtue of using a cheap-to-get value (the version number) as a totally inadequate proxy for something else (exposure to a vulnerability) when the two have only a weak to no actual correlation in real life. And then they shut up. Until the next audit comes around...


> But of course THEY don't want to run the audit (sounds too much like work!) so they contract the audit out to a third-party that builds a database saying which versions of various software are vulnerable according to these CVEs. Of those, these contractors don't actually understand how Linux distributions are put together and their scanners _always_ flag fully patched and up-to-date machines as being vulnerable to X, Y, and Z.

It depends on which software is used and how scans are done.

For (e.g.) Nessus, if all it does is a port/protocol scan, and something like "OpenSSL 3.0.2" is reported in the Apache/web server string, then it is going to get flagged.

But you can set up "authenticated scans" where Nessus can go in as a (non-privileged) user and get a package listing. It then has a list of CVEs for each distro, which distro packages are vulnerable, and in which version the CVE was fixed in: you get a report saying "Package X is vulnerable because you are running Version a.b.c; please install Version a.b.c_foo1 to fix".

Run a yum/apt-get update to pull in the newest package(s), and the vulnerability is cleared on the next scan after the _foo1 patched package is running.

The fact that your auditors (a) are using crappy scanning software, (b) do not know how to use it, and/or (c) the scanners cannot / are not allowed to login to get package versions, does not mean that auditing is inherently bad or useless.


Even the best auditors I've seen have crappy software that will flag the Apache Version String even if they're also running on the machine and can identify that it's actually an up-to-date .deb running.


Literally having this discussion at work now. The vulnerability ticket against my service won't close until the vendor database is updated to reflect the security patch backport to Jammy, even though everyone agrees that 3.0.2-0ubuntu1.7 is not vulnerable.

This is in any case notwithstanding the fact that the detection is in the base image of a Docker container from a vendor that confirms they don't use OpenSSL; and that said container runs in a context where the only TLS services it faces off to are run by AWS.


A good definition of "security theater". I understand why the security team acts like that but it is still theater nonetheless.


To be fair, that was the intent of versioning. As an external auditor who is aware of backporting, it’s very difficult to track down and investigate if the version in use has been patched or not - I assume system owners have the same difficulty. If versioning doesn’t clearly articulate patch level, then it’s the patching protocols which are broken, not the auditor.


I'm in agreement. The distros really get in the way of things like this and it's kind of on them to address it.


Distros have different goals from upstream software, and upstreams all have different policies too.

For instance, plenty of upstream software never even releases security fixes in older versions, yet distributions might be committed to supporting them for 5 or 10 years in their LTS releases.

The only universal solution to this is back-porting: it reduces the risk of exposing LTS release customers to backwards compatibility issues, but increases the risk of a bad patch slightly.

If upstreams cared about the things distro customers cared about too (don't break my stuff I haven't changed), it would be much easier to put the blame solely on distros, but they don't.


I don’t think this is an unsolvable issue. Better naming conventions, or some kind of standardised util would do.

    $ patchinfo openssl
    3.0.6 LTS
    Patches:
      CVE-2022-xxx
      CVE-2022-xxx
      etc.


I do think it's unsolvable, since it's not a technical problem, but a societal one: Debian packages already list patched CVEs in their changelogs in a standardized format.

Introducing a new tool will only add another on top of the existing tools, when nothing tells us that people will use it! (plug the xkcd on n standards -> fix with a better standard -> n +1 standards)

Basically, you want to make everyone use a single tool when humans always strive to build something better or just different :)

The best we could hope to achieve is to have the CVE MITRE database start accepting "patched in" strings from all the distributors (so as Ubuntu pushes a signed and patched package to the archive, it pings the CVE db with a new version string for eg. Ubuntu-22.04 namespace).

Even that gets tricky with dependencies and issues covering multiple packages ("this is only patched if you've updated both of these"), though that's a technically solvable problem.


I agree with pretty much everything you said, but possibly a common metadata format would help (both our separate tools report differently but off the same data, etc).

I also think that you're probably right about the "patched in" idea; it's not infinitely scalable, but I doubt it needs to be anyway.


VMware ship 1.0.2zb at the mo. I wouldn't want to maintain that fork!


One can buy premium support from OpenSSL for 1.0.x and let them supply patches and releases. This is what the company I work for does.

At one point I was managing a fork of OpenSSL 0.9.7 for an OS version and in order to communicate what vulnerabilities were fixed, we appended the list of CVEs to the version string. The line grew to hideous dimensions as you can imagine.


You can find info at https://ubuntu.com/security/cves for *buntu.


I would expect there should be a patch sometime today, but maybe not for a few hours based on prior experience.


openssl packaged as `3.0.2-0ubuntu1.7` fixes the issue. So `>1, <= 3.0.2-0ubuntu1.6` is vulnerable.

If you are using an APT mirror, you might not see the update yet. Consider adding `deb http://archive.ubuntu.com/ubuntu jammy-updates main restricted` to `/etc/apt/sources.list` to get the updated package


> Ubuntu packages are built with stack protector, reducing the impact of this CVE from remote code execution to a denial of service.


I just received the updated packages!



Sounds like this is mostly caught by stack overflow protections. From the release blog:

Firstly, we had reports that on certain Linux distributions the stack layout was such that the 4 bytes overwrote an adjacent buffer that was yet to be used and therefore there was no crash or ability to cause remote code execution.

Secondly, many modern platforms implement stack overflow protections which would mitigate against the risk of remote code execution and usually lead to a crash instead.

However as OpenSSL is distributed as source code we have no way of knowing how every platform and compiler combination has arranged the buffers on the stack and therefore remote code execution may still be possible on some platforms.



There are probably many such vulnerabilities in this giant code base, being exploited by those who have resources to find them.

If OpenSSL were written in Rust, to what extent would the vulnerabilities be reduced (assuming that Rust is supported by the host, of course)?


rustls can serve as an alternative.[1] Dirkjan Ochtman, one of the main contributors, wrote about it in this thread.[2]

[1] https://github.com/rustls/rustls

[2] https://news.ycombinator.com/item?id=33423296


Breaking news: software written in a language with error prone memory management has another security incident related to memory management.

Stay tuned for the next buffer overflow, use after free, and another easily preventable problem. We will run out of CVE numbers soon thanks to OpenSSL.

If only there were well-known solutions to this problem. Like programming languages that restrict what you can do in terms of memory management... Like Rust.

But hey, don't get distracted by automated memory management solutions, we are too busy fixing memory management problems manually.


Yet every time C comes up in a discussion here, someone will insist that it’s fine, that only bad programmers make mistakes.

I don’t know Rust, and I love C, but we’re well past the point where one can reasonably argue that careful use of C is good enough.


I am sure your OpenSSL-like library in Rust with support for using from all other programming languages is coming out any day now?


No need to wait, at least for TLS https://github.com/rustls/rustls-ffi


Thanks, that's honestly very nice! I look forward to hearing about it as it becomes more widely used!


Make fun all you want, bro. The days of juggling chainsaws are coming to an end. The NSA is crying.


Hugops to anyone patching today. I'm currently rolling over-the-air updates to a fleet of hundreds of Raspberry Pis distributed around the world.


Red Hat vulnerability page with a system detection script:

https://access.redhat.com/security/vulnerabilities/RHSB-2022...


FYI: This detection script only runs on RHEL (and presumably variants?); it does an "rpm" query and looks for specific packages, rather than scanning the system for the vulnerability itself.


Yes, this is a tool for users of RHEL systems to know whether or not they have a vulnerable version of OpenSSL on their systems. It's not a general PoC detection script.


Seems like RHEL/Centos/Rocky < 9 aren't affected (OpenSSL is too old to be vulnerable ... makes a change). Not seeing any OpenSSL updates on any of my Rocky 9 boxen just yet.


https://twitter.com/cryptodavidw/status/1587505925731934208

> Can you imagine that 10 years ago, a paper came out called "The most dangerous code in the world: validating SSL certificates in non-browser software". And today we're still getting X.509 critical vulnerabilities in OpenSSL (https://crypto.stanford.edu/~dabo/pubs/abstracts/ssl-client-...)


woof sympathy to everyone else who joined both the openssl exploit + the rust bound checking stan drinking game for november

rough afternoon ahead


> the bugs were introduced as part of punycode decoding functionality

Did they handroll their own decoder, or did they use the reference code [0] in the RFC?

[0] https://www.rfc-editor.org/rfc/rfc3492#page-23


I suppose this is the "clear your schedule and be prepared to patch the entire world" version?


Lots of distros don't use v3 yet and are not affected.

This is definitely far from Heartbleed level of catastrophe.


Feh. I just upgraded my production and staging machines to Ubuntu 22, which no longer have v1, and which breaks compiling older (but still maintained) versions of Ruby. Everything is still running, but this change caught me flatfooted. I groused about Ubuntu, and someone told me that Fedora has also changed over. You say "lots" of distro's haven't. Which ones? (And, sure, I can already assume Debian stable, since that runs 7 years behind everything else, but what else?)


I'm oversimplifying it a bit, but anything that hasn't reached stable this year is still using v1.1.1 (and therefore unaffected).

Ubuntu v22.04 is vulnerable, but any before it is not. Debian is good (except bookworm which is currently in testing), Fedora (<36) is good, RHEL/CentOS (<9), Arch...

So on top of being not as serious as Heartbleed, servers that are a bit longer in operation (but still well within their support cycle) don't need patching.

https://github.com/NCSC-NL/OpenSSL-2022/tree/main/software

EDIT just to add this quote from their blog post (https://www.openssl.org/blog/blog/2022/11/01/email-address-o...):

> We did release an update to OpenSSL 1.1.1, namely 1.1.1s, also on 1st November 2022, but this is a bug fix release only and does not include any security fixes.


> You say "lots" of distro's haven't. Which ones?

Surprisingly enough, Arch Linux, a rolling release distro, still hasn't. It's a real mixed bag.


Neither has Gentoo, unless you unmasked 3.0 yourself.


This is a good overview of what’s vulnerable:

https://github.com/NCSC-NL/OpenSSL-2022/blob/main/software/R...


In addition to very few distros using OpenSSL 3, your server is only affected if you do client certificate verification, which is exceptionally rare for public internet servers.

As a client, you're only affected if you connect to a malicious server.


Could in theory be utilized to move laterally in networks where client TLS is used for authentication, which I see used sometimes.


OpenSSL 3.x has a fairly small install base


Exactly how panicked should I be right now?


Not terribly; not only is it a hard path to hit (you need the malicious certificate to be issued by a trusted CA), but you also have to figure out how to turn a very constrained 4-byte stack buffer overflow into something more powerful. Compiler engineers have been well aware of stack buffer overflows for a long time, and so a lot of modern compilers do cheeky things to mitigate these sorts of overflows, ranging from placing these buffers at the bottom of the frame (so a linear overflow doesn't hit anything) to stuff like stack cookies protecting the return address from linear, blind overflows. This isn't to say it's impossible to exploit (as the linked post shows) given some lucky compiler decisions on where other things are placed, but as it stands it's unlikely to be usable as is.
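If you want to see those mitigations in action, here's a deliberately broken toy (not OpenSSL code): compiled with `-fstack-protector-strong`, which Ubuntu's gcc enables by default, a linear overflow of the local buffer trips the canary check and aborts the process, which is exactly the RCE-to-DoS downgrade being described.

    /* Contrived linear stack overflow for illustration; not OpenSSL code.
       Build: gcc -O2 -fstack-protector-strong demo.c -o demo
       Run:   ./demo AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
       This typically prints "*** stack smashing detected ***" and aborts,
       because the compiler placed a canary between the buffer and the
       saved return address and checks it before returning. */
    #include <stdio.h>
    #include <string.h>

    static void copy_label(const char *src)
    {
        char label[16];
        strcpy(label, src);          /* no bounds check: classic linear overflow */
        printf("copied: %s\n", label);
    }

    int main(int argc, char **argv)
    {
        if (argc > 1)
            copy_label(argv[1]);
        return 0;
    }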


Extremely


You should rarely be extremely panicked. It's not a very useful state of mind.


woah at least a DOS with just a malicious email address and potentially an RCE. yiiiiiikes that's bad


From what I am reading the address needs to be in a certificate you trust. So, then the question is, who is issuing certificates some lunatic wrote nonsense into, but which you trust? In many cases the answer will be "Nobody".


I would approach the issue from the opposite direction.

What is the minimum failure of any CA required for me to get popped? If some malicious actor goes through the effort to get me, how many other clients/servers on the internet can get popped by the same CA breach / malicious cert?

How would you know to un-trust a CA you already trust until after the incident (a malicious certificate issued) had already happened? Even if the incident happened, someone still needs to alert you to un-trust the CA involved. That takes time.

History of mistakes / bad decisions by cert authorities: https://sslmate.com/resources/certificate_authority_failures

> who is issuing certificates some lunatic wrote nonsense into, but which you trust

Do you know every CA your OS and each browser trusts out of the box? Have you done an audit of each one of them? Does an audit 100% prevent a breach from happening in the future?

What are the odds that their policies are perfectly followed every time and that no employee in the right place can be bribed/blackmailed?

Remember Cert Authorities are watering holes. Almost all of the internet trusts a few of them. By breaching one (which may be very significant, either technically or reputationally), the payoff can be massive.


> What is the minimum failure of any CA required for me to get popped?

They need to issue an intermediate with this weird constraint. For most CAs that's a major business decision, maybe bringing in auditors to witness the ceremony, and then it's advertised, at which point we'd go "Er, why does it have this really strange constraint?" and the jig is up.

> Do you know every CA your OS and each browser trusts out of the box?

Yes.

> Have you done an audit of each one of them? Does an audit 100% prevent a breach from happening in the future?

No, I have not performed an audit of any public CAs. My observation, over years more and less actively overseeing the Web PKI (sometimes as a contributor to m.s.d.policy) is that the CAs are on the whole not malevolent but they're sometimes incompetent and lazy, like most humans.

I think you're imagining that an end entity certificate is what's needed, but the end entity certificate isn't interesting here; OpenSSL is (mis)parsing data from an intermediate, and the end entity certificate is just a trigger, not an interesting one. I don't see any way you could get such a "trigger" certificate from Let's Encrypt, but I wouldn't expect it to be hard to buy something suitable from a for-profit CA. However, the problem is that there's no malformed intermediate to act as the explosive for you to detonate.


Yeah, I was expecting Heartbleed and this is “denial of service if you manage to sneak a malformed certificate by a CA and it makes it into an attack chain”. Other than the sheer number of devices vulnerable, I don’t see this as being that big a deal.


When will this package get updated on GCP? We're currently on Ubuntu 22.04 and still on 3.0.2.


It's going to be a backport, so it's going to be a fixed 3.0.2, not 3.0.7.



Last time we went through this (Heartbleed, I think) it took several hours.


This vulnerability could actually be weaponized against botnets that utilize older SSH clients.


OpenSSH doesn't use X.509 certs.


Shouldn't AI be able to track down these security problems in C and C++ code and patch them properly? It kind of seems like if current AI can't solve these problems which are strictly programmatic then what else can we expect them to solve. Or is even this some kind of NP level problem?


Affected (and unaffected) software tracking page shared earlier today:

https://github.com/NCSC-NL/OpenSSL-2022/blob/main/software/R...


(“Affected” defined as “uses OpenSSL 3.x”, not “can be exploited”)


Reminder that rustls exists as a pretty mature TLS implementation in safe Rust (thus systematically avoiding issues like this). Thanks to Brian Smith for creating the webpki crate which was thoroughly engineered from the start to avoid stuff like this.

rustls has C bindings these days: https://github.com/rustls/rustls-ffi

I've started work on Python bindings too, with the idea that it probably wouldn't be crazy hard to do something that can pass as an `ssl.SSLSocket`. Please sponsor me on GitHub if that's something you'd like to use (https://github.com/sponsors/djc).

Note, we're aware that by far the biggest impediment to adopting rustls is the lack of support for IP addresses in certificates (we currently need a DNS name). This work is funded and should be completed in the next few months.


Rustls only supports TLS 1.3 and 1.2, as a design choice - so long as cutting older clients off is acceptable, you should be using rustls.


TLS 1.2 has been supported by every major browser since 2014. Using a browser older than that is just simply irresponsible.


There's a reason why IETF deprecated TLS 1.0 and 1.1. So your point is somewhat moot. If anyone's using either of these, they're just waiting to be exploited.


The criticism that I have heard regarding ring/rustls is that the crypto primitives are implemented in assembly and there are no portable reference implementations that can be used to verify them.

In contrast, EverCrypt (part of Project Everest) is a formally verified cryptographic provider. There is a portable implementation of all algorithms that is used to verify the correctness of the assembly implementations.

https://project-everest.github.io/


Work is ongoing to use FiatCrypto-based implementations for the primitives, which is discussed a bit here:

https://www.crowdsupply.com/sutajio-kosagi/precursor/updates...


That is great to see and a welcome change.

Also interesting to see the Precursor project getting so deep in the weeds! I saw that project on crowd supply months ago and thought it was notable.


What is performance like? We had to move off libressl because of performance, which makes me sad.

In particular, does it support all the common crypto accelerator CPU instructions? How does it fare on the microbenchmarks the various openssl forks ship?


There are fairly comprehensive measurements from 2019:

https://jbp.io/2019/07/01/rustls-vs-openssl-performance.html

I'm pretty sure current versions of rustls are faster than the ones from 2019, but I don't have an intuition for how OpenSSL performance has evolved in the past three years.

I'd like to do another comparison some time soon.

(Yes, the underlying ring crypto library should take advantage of specific instructions available on common CPU architectures.)


Where's the language spec? The reference is nice but it's not a formal specification.

So much undefined behavior - more than C! /s

Edited to reflect sarcasm as rustaceans apparently can't infer it.


In my understanding it would be hard to make the case that Rust actually has more undefined behavior than C -- most kinds of undefined behavior in C have been carefully avoided in Rust, although, yes, there is no piece of paper ratified by a bunch of national technology institutes that describes Rust.

See also this recent blog post:

https://blog.m-ou.se/rust-standard/


While the rust community might not believe their language needs a specification, some of their would-be customers have business requirements surrounding a specification that cannot be fulfilled with a reference.

edit: clarity


A standard or a specification? Anyway, hopefully Ferrocene will be able to provide those folks what they need.

https://ferrous-systems.com/ferrocene/


Thank you for catching this mistake on my part, fixed.


Can you explain what the difference is between a standard and a reference?


See edit


Of course you'd expect a formal specification to include details on what the fundamental data types actually are - C11 can't be nailed down on what char is, it defines it as a type large enough to store a member of the implementation-defined basic character set with implementation defined signedness. Which means you could have a valid implementation with anything from a 4-bit unsigned char to a 256 bit or larger signed char


> Which means you could have a valid implementation with anything from a 4-bit unsigned char to a 256 bit or larger signed char

This is not quite accurate; there is no upper limit to the number of bits in a char (except for INTMAX_MAX, perhaps), but it cannot have any fewer than 8 bits, including a possible sign bit.


All the C types have had minimum allowed ranges going back to ANSI C.

For 'char', it must at least support the range 0 to 127 (and in addition, it must have the same range as either 'unsigned char' or 'signed char').
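If you're curious what your own implementation picked, the standard's limits.h macros spell it out (CHAR_BIT is guaranteed to be at least 8, and sizeof(char) is 1 by definition regardless of CHAR_BIT):

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        /* CHAR_BIT >= 8 is guaranteed; CHAR_MIN/CHAR_MAX reveal whether plain
           char is signed or unsigned on this implementation. */
        printf("CHAR_BIT = %d\n", CHAR_BIT);
        printf("CHAR_MIN = %d, CHAR_MAX = %d\n", CHAR_MIN, CHAR_MAX);
        printf("plain char is %s\n", (CHAR_MIN < 0) ? "signed" : "unsigned");
        return 0;
    }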


     * Fixed two buffer overflows in punycode decoding functions.

       A buffer overrun can be triggered in X.509 certificate verification,
       specifically in name constraint checking. Note that this occurs after
       certificate chain signature verification and requires either a CA to
       have signed the malicious certificate or for the application to continue
       certificate verification despite failure to construct a path to a trusted
       issuer.

       In a TLS client, this can be triggered by connecting to a malicious
       server.  In a TLS server, this can be triggered if the server requests
       client authentication and a malicious client connects.

       An attacker can craft a malicious email address to overflow
       an arbitrary number of bytes containing the `.`  character (decimal 46)
       on the stack.  This buffer overflow could result in a crash (causing a
       denial of service).
       ([CVE-2022-3786])

       An attacker can craft a malicious email address to overflow four
       attacker-controlled bytes on the stack.  This buffer overflow could
       result in a crash (causing a denial of service) or potentially remote code
       execution depending on stack layout for any given platform/compiler.
       ([CVE-2022-3602])


Apparently the "Critical" vulnerability has been downgraded to "High" since the annoucement: https://www.openssl.org/news/vulnerabilities.html


Huh. I wonder why they decided to walk it back _after_ disclosure.


This is answered in their blog entry: https://www.openssl.org/blog/blog/2022/11/01/email-address-o...

  A: CVE-2022-3602 was originally assessed by the OpenSSL project as CRITICAL as 
  it is an arbitrary 4-byte stack buffer overflow, and such vulnerabilities may 
  lead to remote code execution (RCE).

  During the week of prenotification, several organisations performed testing 
  and gave us feedback on the issue, looking at the technical details of the 
  overflow and stack layout on common architectures and platforms.

  Firstly, we had reports that on certain Linux distributions the stack layout 
  was such that the 4 bytes overwrote an adjacent buffer that was yet to be used 
  and therefore there was no crash or ability to cause remote code execution.

  Secondly, many modern platforms implement stack overflow protections which 
  would mitigate against the risk of remote code execution and usually lead to a 
  crash instead.

  However as OpenSSL is distributed as source code we have no way of knowing how 
  every platform and compiler combination has arranged the buffers on the stack 
  and therefore remote code execution may still be possible on some platforms.

  Our security policy states that a vulnerability might be described as CRITICAL 
  if “remote code execution is considered likely in common situations”. We no 
  longer felt that this rating applied to CVE-2022-3602 and therefore it was 
  downgraded on 1st November 2022 before being released to HIGH.


So, back in the early 2000s, it was common knowledge that some US gov't agency (NSA?) had managed to get a saboteur into the SSL standardization committee.

Once they were outed, it was clear that all they did was push as much complexity as possible into the spec, ensuring a steady stream of vulnerabilities like this.

For instance, instead of hardcoding things that are definitely fine, they would push through a configuration knob, or champion obscure extensions to the wire protocol in the name of "generality". Whenever someone else proposed a complication, they would fast-track it as much as possible, and simplifications were black-holed.

I can't find a reference anywhere. Does anyone know what I'm (mis)remembering?




There's some more discussion of ipsec in the thread https://www.metzdowd.com/pipermail/cryptography/2020-July/03...

I'd be interested to see references to SSL too (from a historical perspective)



Specifically https://gist.github.com/FiloSottile/611fc3fa95c3aceebf258098...

    -    if (written_out > max_out)
    +    if (written_out >= max_out)
             return 0;
Plus a second, less memeable fix.
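To spell out why that one character matters, here's a minimal sketch of the same bug shape (not OpenSSL's actual code): with `>` instead of `>=`, the guard still passes when the count equals the capacity, so one more unsigned int gets written just past the end of the buffer, which is where the four attacker-controlled bytes of CVE-2022-3602 come from.

    /* Illustrative sketch of the bug shape only, not OpenSSL's code. */
    #include <stdio.h>

    #define MAX_OUT 8

    static int append(unsigned int *buf, unsigned int *written_out, unsigned int value)
    {
        /* Buggy guard: when *written_out == MAX_OUT this check still passes,
           and the write below lands at buf[MAX_OUT], one element out of bounds.
           The fix is the same single character as in the patch: >= instead of >. */
        if (*written_out > MAX_OUT)
            return 0;
        buf[(*written_out)++] = value;
        return 1;
    }

    int main(void)
    {
        unsigned int out[MAX_OUT];
        unsigned int written_out = 0;

        for (unsigned int i = 0; i <= MAX_OUT; i++)   /* one append too many */
            append(out, &written_out, i);
        printf("wrote %u entries into a buffer of %d\n", written_out, MAX_OUT);
        return 0;
    }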


I don't understand this one:

     -                (written_out - i) * sizeof *pDecoded);
     +                (written_out - i) * sizeof(*pDecoded));
Edit: just to be clear-- I'm claiming there's no way the addition of parentheses could change the behavior of the program.

I guess I can understand resolving an ambiguity so that the reader doesn't have to look up precedence rules. But when combined with a high-profile bug in a critical piece of infrastructure it looks a bit like the software equivalent of not shaving one's beard in the hopes of winning a sports tournament.


Looks to me like a coding style fix along for the ride with the correctness fix.


I don't think that's security-relevant, likely just keeping a linter happy.


Ah, I didn't think about a linter. I'll take a look at the code later to see if that's the case.


sizeof is a keyword because of some terrible reason, but I'd parenthesize it just to make the intention clear.


Why is sizeof a terrible keyword? Just think of it as an operator rather than a function.


Should have been a callable, or should have had consistent syntax to prevent people from being clever.


Agreed. sizeof FEELS like a function, and so I use parentheses with it, even though I know it's not actually a function.


Wasn't there some subtle difference between using sizeof with vs without parens? Something like if you pass a type to sizeof -- e.g. sizeof(int *) -- you must use parens anyway? It's been years since I've used C, I'm probably not remembering it correctly.


Yes, exactly that. `sizeof(int)` OK, `sizeof(i)` OK, `sizeof i` OK, `sizeof int` not OK. One of those weird rules.
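A compilable summary of the rule (parentheses are only required when the operand is a type name):

    #include <stdio.h>

    int main(void)
    {
        int i = 0;
        int *p = &i;

        printf("%zu\n", sizeof(int));   /* type name: parentheses required  */
        printf("%zu\n", sizeof(i));     /* expression: parentheses optional */
        printf("%zu\n", sizeof i);      /* expression without parentheses   */
        printf("%zu\n", sizeof *p);     /* *p is an expression, so this is fine */
        /* printf("%zu\n", sizeof int); */ /* error: a bare type name needs parentheses */
        return 0;
    }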


From the variable names, it is clear that the old version was correct. Someone should revert this. :-)


[flagged]


I don't think it is fair to blame the language for this.

Look at the lines above the vulnerability:

  n = n + i / (written_out + 1);
  i %= (written_out + 1);
Whatever buffer manipulation this is doing could be abstracted out into a library. (As stdio.h does.)

Also, since written_out probably varies from loop iteration to iteration, what does "n remainder i" even mean?

Also, why don't they increment written_out before doing the arithmetic, and why do they increment i on line 523, when the only intermediate use of it is "i + 1"?

I've written some crappy code in my time, but when things descended into that level of insanity, I either delegated to fuzz tested functions, or at least left a comment.


You’re not wrong.

Back in the day, you didn’t have alternatives with zero-cost abstractions like we have today. There are many valid reasons why one would make decisions to balance performance against memory-safety, especially 25 years ago – and given the fact that networking code tends to be part of hot paths where optimization matters.


To be fair, OpenSSL was first released in 1998. We didn't have fast memory-safe languages back then. Java existed, but was incredibly slow.


The function in question was written in 2020. OpenSSL 3 is an incompatible major rewrite undertaken during 2018-2021, and there is no valid excuse for why language quality was not improved at that time.


> OpenSSL 3 is an incompatible major rewrite

Oh snap I didn't know that, and I probably should have. TIL.

Then yeah, you're right. It probably should have been written in Rust.


come on, that is quite harsh. c is easily my favorite programming language and it just works really damn well and produces very small binaries. it's easy to write terrible code in any programming language.


> c is easily my favorite programming language

That's not really relevant to the discussion of whether it's the proper language to be used for something like OpenSSL.

> it just works really damn well

As far as I can tell, all mainstream programming languages "work well", so it's not clear what you mean.

> produces very small binaries

True but not very important IMO

> it's easy to write terrible code in any programming language

Sure. But it's not easy to write this particular flavor of terrible code -- buffer overflows -- in any commonly-used language other than C and its derivatives.


C++ generally produces smaller and faster programs because the language gives compilers much more to work with.


> Q: Is this a branded vulnerability?

> A: The OpenSSL project has not named or created logos for either CVE. The best way to refer to them is via the CVE names to avoid confusion.

I chuckled a bit, but im also sad that we ended up in this situation. The obsession with brands is, to some extent, unhealthy.


No, it's not. Branded vulnerabilities are memorable. CVE numbers aren't.


Only for major problems, and even then it seems iffy. I have trouble remembering which specific issue Meltdown actually refers to a couple years on.

Personally, I'm totally fine with CVEs. (And if people to stop acting like QIDs are interchangeable, that would be great, too.)


You would have no connection to the underlying vulnerability whatsoever if I just mentioned CVE-2017-5754 to you.


No, but I can establish context in about 10 seconds, and then it doesn't matter.

And you won't remember what WhizzyBadBug refers to in 5 years, you'll have to look it up. It'll only take you 10 seconds or so.


I don't know. The example you chose was Meltdown. I think most people in the field heard Meltdown and can at least place it into the right bucket, of microarchitectural attacks. But more to the point: Meltdown in the moment was far more useful than a CVE number.


Without looking: Struts !!!! But not 100% sure I got it right.

Update: And I'm completely wrong. :-)


It's not a real critical vulnerability unless it has a marketable name and its own website. Technical names are boring.

While it's cool to talk about heartbleed and people knowing what it was (it even had its own XKCD comic), I fear this marketing trend is detrimental because it's not really aimed at the people who apply the patches/mitigations.


The branding does not help the patching; it's for the self-promotion of security people. Its incentives point in the wrong direction.


Since they declined to name it, we get to name it for them, right? Let's call it CertHurt.


RCE-mail? Hopefully someone can work "Cert" in, but all my attempts broke up the meter of saying it

Of course, that's cheating a little since we now know that it's not _automatically_ an RCE


NCSC is calling it SpookySSL but I think it is just for funsies. https://github.com/NCSC-NL/OpenSSL-2022


Maybe we should set up a naming system just like for COVID variants


It probably serves them better that they haven't. It'd turn it into a meme like Heartbleed that would not be forgotten for the next decade.


This is nowhere near as severe as Heartbleed. That's why it's not considered Critical (and also why it hasn't been given a special name.)


>Q: Is this a branded vulnerability?

>A: The OpenSSL project has not named or created logos for either CVE. The best way to refer to them is via the CVE names to avoid confusion.

"I survived CVE-2022-3786 & CVE-2022-3602 and all I got was this crappy t-shirt."


Is this exploitable? With current mitigations (NX, stack canaries, ASLR), I don't see how a buffer overflow on its own could result in remote code execution.


Not every system has the necessary mitigations enabled.


The most surprising to me is that NodeJS says they are affected https://nodejs.org/en/blog/vulnerability/openssl-november-20...


The official NodeJS binaries statically link to OpenSSL, so they have to be patched explicitly for OpenSSL vulnerabilities.


They're only affected in the sense that newer versions of Node use OpenSSL 3.x.

> Node.js v18.x and v19.x use OpenSSL v3. Therefore these release lines are impacted by this update.

> Node.js 14.x and v16.x are not affected by this OpenSSL update.

> At this stage, due to embargo, the exact nature of these defects is uncertain as well as the impact they will have on Node.js users.

> After assessing the impact on Node.js, it will be decided whether the issues fixed require immediate security releases of Node.js, or whether they can be included in the normally scheduled updates.


Might be more surprising to you that it looks like you're shadowbanned with all your comments showing up dead (last 2 were vouched). I glanced at your post history and couldn't see why so you might want to send an email to hn@ycombinator.com.


I wonder if they are built with stack protector turned on in the compiler flags.


Url changed from https://mta.openssl.org/pipermail/openssl-announce/2022-Nove... to a post with more background.

(via https://news.ycombinator.com/item?id=33423271, but we merged that thread hither)


If you are tired of the frequency of CVEs in OpenSSL, consider sending a formal complaint to maintainers@openssl.org\0hunter2


It'd be more productive to switch to libressl or bearssl.


[deleted]


It's really better to just wait for the thing you want to be posted to actually appear on the web instead of using 'placeholders' or announcements with incomplete info. For HN purposes, at least, it's ok to wait since it's not a news ticker despite the word in the name.


You're right. I'll delete it.


Would have been better to wait for something like the thing linked here https://news.ycombinator.com/item?id=33423271 for the submission in general

Otherwise moderators have to run around cleaning stuff up, fixing titles, merging threads, posting notes about comments that make less sense due to merge, etc.

Sorry this sounds so beratey, just worth keeping in mind next time.


You're absolutely right. And I didn't take it as beratey, no drama.


[dead]


There's a certain poetic irony in my opinion that to investigate whether you're vulnerable to a possible RCE CVE, you can curl directly into a shell.


Yeah that gave me pause.


RCE.


[flagged]


The DoD security assessment of Multics refers to PL/I's bounds checking capabilities as why it got a better evaluation mark than UNIX...


Quick question: I didn't look into the details of the issue and I'm a novice at Rust as well. For those who have already checked the details of the vulnerability: is this the kind of bug we could prevent by using Rust instead of C?


Bog standard buffer overflow caused by incorrect bounds checking. Yes.


Indeed. For illustration, the Ubuntu commits that fix the two CVEs:

https://git.launchpad.net/ubuntu/+source/openssl/commit/?h=a...

  -        if (written_out > max_out)
  +        if (written_out >= max_out)

  [...]
https://git.launchpad.net/ubuntu/+source/openssl/commit/?id=...

  -            if (tmpptr != NULL)
  -                PUSHC('.');
  +            PUSHC(tmpptr != NULL ? '.' : '\0');

  -    char a_ulabel[LABEL_BUF_SIZE];
  +    char a_ulabel[LABEL_BUF_SIZE + 1];
https://git.launchpad.net/ubuntu/+source/openssl/commit/?id=...

  -            || type->origin == EVP_ORIG_METH) {
  +            || (type != NULL && type->origin == EVP_ORIG_METH)
  +            || (type == NULL && ctx->digest != NULL
  +                             && ctx->digest->origin == EVP_ORIG_METH)) {

  -            || impl != NULL) {
  +            || impl != NULL
  +            || (cipher != NULL && cipher->origin == EVP_ORIG_METH)
  +            || (cipher == NULL && ctx->cipher != NULL
  +                               && ctx->cipher->origin == EVP_ORIG_METH)) {



