The bug is simple: like a lot of number-theoretic asymmetric cryptography, the core of ECDSA is algebra on large numbers modulo some prime. Algebra in this setting works for the most part like the algebra you learned in 9th grade; in particular, zero times any algebraic expression is zero. An ECDSA signature is a pair of large numbers (r, s) (r is the x-coordinate of a randomly selected curve point based on the infamous ECDSA nonce; s is the signature proof that combines r, the hash of the message, and the secret key). The bug is that Java 15+ ECDSA accepts (0, 0).
For the same bug in a simpler setting, just consider finite field Diffie-Hellman, where we agree on a generator G and a prime P: Alice's secret key is `a`, and her public key is `A = G^a mod P`; Bob does the same with his secret `b`, giving public key B. Our shared secret is `A^b mod P`, equivalently `B^a mod P`. If Alice (or a MITM) sends 0 (or 0 mod P) in place of A, then they know what the result is regardless of anything else: it's zero. The same bug recurs in SRP (which is sort of a flavor of DH) and protocols like it (but much worse, because Alice is proving that she knows a key and has an incentive to send zero).
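That failure mode is easy to demonstrate; here's a minimal Java sketch (the class and variable names are mine, and the parameters are toy-sized, not production ones):

```java
import java.math.BigInteger;
import java.util.Random;

public class ZeroKeyDH {
    // Bob's side of finite-field DH: the shared secret is peerPublic^b mod p.
    static BigInteger sharedSecret(BigInteger peerPublic, BigInteger b, BigInteger p) {
        return peerPublic.modPow(b, p);
    }

    public static void main(String[] args) {
        BigInteger p = BigInteger.probablePrime(512, new Random(42));
        BigInteger b = new BigInteger("123456789"); // Bob's secret; value is arbitrary

        // A MITM substitutes 0 for Alice's public key A.
        // Bob's "shared secret" is then 0^b mod p = 0, no matter what b and p are.
        BigInteger forced = sharedSecret(BigInteger.ZERO, b, p);
        System.out.println(forced); // prints 0
    }
}
```

The fix is the same as for ECDSA: validate the peer's value before using it (reject 0, 1, and P-1 in classic DH).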
The math in ECDSA is more convoluted but not much more; the kernel of ECDSA signature verification is extracting the `r` embedded into `s` and comparing it to the presented `r`; if `r` and `s` are both zero, that comparison will always pass.
It is much easier to mess up asymmetric cryptography than it is to mess up most conventional symmetric cryptography, which is a reason to avoid asymmetric cryptography when you don't absolutely need it. This is a devastating bug that probably affects a lot of different stuff. Thoughts and prayers to the Java ecosystem!
R = SB - Hash(R || A || M) A
Where R and S are the two halves of the signature, A is the public key, and M is the message (and B is the curve's base point). If the signature is zero, the equation reduces to Hash(R || A || M)A = 0, which is always false with a legitimate public key.
And indeed, TweetNaCl does not explicitly check that the signature is not zero. It doesn't need to.
There are still ways to be clever and shoot ourselves in the foot. In particular, there's the temptation to convert the Edwards point to Montgomery, perform the scalar multiplication there, then convert back (doubles the code's speed compared to a naive ladder). Unfortunately, doing that introduces edge cases that weren't there before, that cause the point we get back to be invalid. So invalid in fact that adding it to another point gives us zero half the time or so, causing the verification to succeed even though it should have failed!
(Pro tip: don't bother with that conversion, variable time double scalarmult https://loup-vaillant.fr/tutorials/fast-scalarmult is even faster.)
A pretty subtle error, though with eerily similar consequences. It looked like a beginner-nuclear-boyscout error, but my only negligence there was messing with maths I only partially understood. (A pretty big no-no, but I have learned my lesson since.)
Now if someone could contact the Wycheproof team and get them to fix their front page so people know they have EdDSA test vectors, that would be great.
https://github.com/google/wycheproof/pull/79 If I had known about those, the whole debacle could have been avoided. Heck, I bet my hat their ECDSA test vectors could have avoided the present Java vulnerability. They need to be advertised better.
some very popular PKI systems (many CA's) are powered by Java and BouncyCastle ...
It makes ECDSA very brittle, and quite prone to side-channel attacks (since those can get attackers exactly such information).
This doesn’t seem right. Why wouldn’t someone guess a bit 0, see if the recovered message makes sense, and if it doesn’t, then try bit 1?
It would make the entire scheme useless no? Am I missing something?
Here you go:
What's especially great about this is that it's very easy to accidentally have a biased nonce; in most other areas of cryptography, all you care about when generating random parameters is that they be sufficiently (i.e., "128 bits of security worth") random. But with ECDSA, you need the entire domain of the k value to be random.
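A hedged sketch of that difference in Java (the helper names are mine; secp256r1's group order stands in for the curve's n, and while the mod-reduction bias is tiny for this particular curve, the pattern generalizes, and lattice attacks can exploit even small biases given enough signatures):

```java
import java.math.BigInteger;
import java.security.SecureRandom;

public class EcdsaNonce {
    // Group order n of secp256r1, used here purely as an example.
    static final BigInteger N = new BigInteger(
        "ffffffff00000000ffffffffffffffffbce6faada7179e84f3b9cac2fc632551", 16);

    // Tempting but subtly wrong: reducing a fixed-width random value mod n
    // makes some values of k slightly more likely than others.
    static BigInteger biasedK(SecureRandom rng) {
        return new BigInteger(N.bitLength(), rng).mod(N);
    }

    // Safer: rejection sampling. Retry until the candidate lies in [1, n-1],
    // so every value of k across the whole domain is equally likely.
    static BigInteger uniformK(SecureRandom rng) {
        BigInteger k;
        do {
            k = new BigInteger(N.bitLength(), rng);
        } while (k.signum() == 0 || k.compareTo(N) >= 0);
        return k;
    }
}
```

(Real implementations increasingly avoid the random nonce entirely via deterministic derivation, RFC 6979.)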
and "large amount of signatures" could be "I sign every email I send to a mailing list" or "I use this key to sign some widely distributed software every two weeks"
Yes, the attacks require many signatures. But as with the infamous Bleichenbacher RSA attack, originally dubbed "The Million Message Attack" in part as a jab at how impractical it was presumed to be, collecting thousands of signatures is often a very realistic attack; consider, for instance, any system that generates signed messages automatically.
E: Thomas beat me to it
Except that, of course, people don't actually do unit testing, they're too busy.
Somebody is probably going to mention fuzz testing. But, if you're "too busy" to even write the unit tests for the software you're about to replace, you aren't going to fuzz test it are you?
But I say that as someone who regularly audits code that, judging by the quality of the applications, almost certainly has no unit tests; even one set would do me fine.
These aren't alternatives, they're complementary. I appreciate that fuzz testing makes sense over writing unit tests for weird edge cases, but "these parameters can't be zero" isn't an edge case, it's part of the basic design. Here's an example of what X9.62 says:
> If r’ is not an integer in the interval [1, n-1], then reject the signature.
Let's write a unit test to check, say, zero here. Can we also use fuzz testing? Sure, why not. But lines like this ought to scream out for a unit test.
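For example, here's a minimal self-contained check against the JCE (a sketch, not a full test suite; class and method names are mine). On a patched JDK the all-zero signature is rejected, while vulnerable Java 15 through 18.0.0 builds accept it:

```java
import java.security.*;
import java.security.spec.ECGenParameterSpec;

public class ZeroSignatureTest {
    // Returns true iff the provider accepts an all-zero ECDSA signature
    // over a message the key never signed.
    static boolean acceptsZeroSignature() throws Exception {
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("EC");
        kpg.initialize(new ECGenParameterSpec("secp256r1"));
        PublicKey pub = kpg.generateKeyPair().getPublic();

        // DER encoding of SEQUENCE { INTEGER 0, INTEGER 0 }, i.e. r = s = 0
        byte[] zeroSig = {0x30, 0x06, 0x02, 0x01, 0x00, 0x02, 0x01, 0x00};

        Signature sig = Signature.getInstance("SHA256withECDSA");
        sig.initVerify(pub);
        sig.update("message the attacker never signed".getBytes());
        try {
            return sig.verify(zeroSig);
        } catch (SignatureException e) {
            return false; // rejecting with an exception also counts as a pass
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(acceptsZeroSignature()
            ? "VULNERABLE: zero signature accepted"
            : "ok: zero signature rejected");
    }
}
```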
Another example is Poly1305. When you look at the test vectors from RFC 8439, you notice that some are specially crafted to trigger overflows that random tests wouldn't stumble upon.
Thus, I would argue that proper testing requires some domain knowledge. Naive fuzz testing is bloody effective but it's not enough.
Also, would you consider the following to be fuzz testing? https://github.com/LoupVaillant/Monocypher/blob/master/tests...
Certainly, fuzz tests would help us test boundary conditions and more, but they are not a catalogue of known acceptance criteria.
For instance, here the keys are around 256 bits in size, so if your fuzzer is just picking keys at random, you're basically never going to pick zero.
With cryptographic primitives you really should be testing all known invalid input parameters for the particular algorithm. A random fuzzer is not going to know those. Additionally, you should be testing that inputs which can cause overflows are handled correctly, etc.
But if you are in a time constrained environment where basic unit tests are skipped fuzz testing will be as well.
Sounds exactly like the kind of disconnected environment that would lead to such bugs.
(That is to say: a Critical Patch Update or a Patch Set Update. Did they really have to overload these TLAs?)
In terms of OpenJDK 17 (latest LTS), the issue is patched in 17.0.3, which was released ~12h ago. Note that official OpenJDK docker images are still on 17.0.2 at the time of writing.
>This is why the very first check in the ECDSA verification algorithm is to ensure that r and s are both >= 1. Guess which check Java forgot?
We're running some production services on OpenJDK and CentOS and until now there are only two options to be safe: shutdown the services or change the crypto provider to BouncyCastle or something else.
The official OpenJDK project lists the planned release date of 17.0.3 as April 19th, still the latest available GA release is 17.0.2 (https://wiki.openjdk.java.net/display/JDKUpdates/JDK+17u).
Adoptium have a large banner on their website and until now there is not a single patched release of OpenJDK available from them (https://github.com/adoptium/adoptium/issues/140).
There are no patched packages for CentOS, Debian or openSUSE.
The only available version of OpenJDK 17.0.3 I've seen until now seems to be the Archlinux package (https://archlinux.org/packages/extra/x86_64/jdk17-openjdk/). They obviously have their own build.
How can it be that this is not more of an issue? I honestly don't get how the release process of something as widely used as OpenJDK can take more than 2 days to provide binary packages for something already fixed in the code.
This shouldn't be much more effort than letting the CI do its job.
Unfortunately, I assume that a very common case is just using the distribution-provided openjdk package and configuring the system for auto updates. So the main issue here is that a serious number of systems are relying on the patch process of the distribution to fix issues like this, and they are still vulnerable at this moment.
As I see it, the distributions are mostly relying on the upstream provisioning of the openJDK project. So if they fix this issue, it shouldn't take long until we see updated packages in all major distributions. This might be a problem specific to the openJDK build process, so a different package source would help in that case.
But as mentioned above, Azul usually doesn't provide out-of-cycle critical fixes without a paid plan. And most people will still use whatever the distribution provides - so this is still an issue regardless of alternative package sources.
And since I assume that many or most running JDK instances actually come from distribution repositories rather than an alternative source, and there is literally no outcry regarding the missing packages whatsoever, I fear there are a lot of vulnerable systems out there whose operators don't know about it right now.
> The official OpenJDK project lists the planned release date of 17.0.3 as April 19th, still the latest available GA release is 17.0.2
I don't think 17.0.3 will ever be available from openjdk.java.net; there's no LTS for upstream builds, and since Java 18 is already out, no further builds of 17 should be expected there. IMO, this warrants some clarification on that site though.
These are the official upstream builds by the updates project, built by Red Hat. Not to be confused with Red Hat's own Java builds, nor with the AdoptOpenJDK/Adoptium builds. These can't be hosted on openjdk.java.net because that site hosts only builds done by Oracle (not to be confused with Oracle JDK).
On the other hand, the problem that many popular server distributions like CentOS and Debian still haven't updated their Java 17 packages remains and I wonder if this is due to their own package build process or because they are waiting for an upstream process to complete.
If they actually rely on the upstream builds from openjdk.java.net that would mean that the fix will not make it to their repositories at all.
Is there any truth to this? Doesn't basically all Internet traffic rely on the security of (correctly implemented) asymmetric cryptography?
The TLS encryption is of course assumed here, but that is nothing most developers ever really touch in a way that could break it. And arguably this part falls under the "you absolutely need it" exception.
x509 certificates have several revocation mechanisms, since needing to mark something "do not use" before the end of its lifetime is a well-understood problem. JWTs are not quite there.
You could compare x509 with revocation to something like oauth with JWT access tokens, though.
In that case, x509 certificates are typically expensive to renew and have lifetimes measured in years. Revocation involves clients checking a revocation service. JWT access tokens are cheap to renew and have lifetimes measured in minutes. Revocation involves denying a refresh token when the access token needs renewing. Clients can also choose to renew access tokens much more frequently if a 'revocation server' experience is desirable.
Given the spotty history of CRLDP reliability, I think oauth+JWT are doing very well in comparison. I'm pretty damn confident that when I revoke an application in Google or similar it will lose access very quickly.
In the Web PKI thanks to Certificate Transparency we can measure, the typical X509 certificate was issued by ISRG (Let's Encrypt) and thus cost well under one dollar (free to the subscriber, that cost is borne by the donors) and has a lifetime of precisely 90 days.
Yes, it's true that in the past few years Let's Encrypt has substantially altered the typical lifetime of web server certificates, as well as substantially eased the burden of refreshing a certificate in what I would guess to be the majority of use cases.
Revocation, however, is still a mess. OCSP services are slow and a privacy leak, and are largely ignored by browsers - in 2021 Firefox was still checking OCSP services but given they're so unreliable if it can't contact a service it assumes the certificate is fine. OCSP winds up being a trade-off between allowing an attacker to conduct a denial of service on all certificates or blocking revocations.
In practice the major browser vendors all do more or less the same thing: build their own proprietary list of revoked certificates and distribute it to browsers from time to time, with varying sources and granularity on what they will and won't include in their centralised CRLs. I would have little faith in a timely revocation of a compromised server certificate.
Yes, symmetric cryptography is a lot more straightforward and should be preferred where it is easy to use a shared secret.
> Doesn't basically all Internet traffic rely on the security of (correctly implemented) asymmetric cryptography?
It does. This would come under the "unless you absolutely need it" exception.
It's a bad idea (and no one should be doing it) to continue using asymmetric crypto algorithms after that. If someone can get away with a pre-shared (symmetric) key, that's sometimes, or even usually, better, depending on the risk profile.
AES-GCM, you mean. Let's not forget the authentication in "authenticated encryption". I'm nitpicking, but if a beginner comes here it's better to make it clear that in general, encryption alone is not enough. Ciphertext malleability and all that.
AES-GCM has the annoying property of output size > input size for instance.
but as tptacek pointed out, all authentication methods are going to increase your message size. It's unavoidable: to get authentication you need some redundancy, and the only general way to get that redundancy is to have a ciphertext bigger than the plaintext. We do have attempts at length preserving authenticated encryption, but as far as I know they're not as well studied as the classical "encrypt-then-mac" methods such as AES-CBC + HMAC or AES-GCM. https://security.googleblog.com/2019/02/introducing-adiantum...
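The fixed per-message overhead is easy to observe with the JDK's own AES-GCM (a sketch; the method name is mine and the 4 KiB buffer stands in for any message):

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.security.SecureRandom;

public class GcmOverhead {
    // Encrypt one 4 KiB "message" and report how many bytes the ciphertext
    // grew by: the single 128-bit authentication tag GCM appends.
    static int overheadBytes() throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(256);
        SecretKey key = kg.generateKey();

        byte[] nonce = new byte[12];          // fresh 96-bit nonce per message
        new SecureRandom().nextBytes(nonce);

        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, nonce));

        byte[] plaintext = new byte[4096];
        byte[] ciphertext = c.doFinal(plaintext); // tag is appended at the end
        return ciphertext.length - plaintext.length;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("overhead: " + overheadBytes() + " bytes"); // 16
    }
}
```

The overhead stays 16 bytes whether the message is 16 bytes or 16 megabytes; it is per message, not per AES block.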
AES-GCM (as a method) is unusual this way, because it combines encryption and validation at the same time, in each block. They’re two steps - you have the cipher text and the validation data separate.
It’s encrypting + signing everything, essentially, for each block. It stores the data for that directly in each block, which is where the inflation comes from.
For why this is both great, and terrible depending on the use case - for problem cases, imagine full disk encryption. If you naively encrypt the block using AES-GCM, any block you encrypt will no longer fit in the device. If you encrypt a file (like a database file) which relies on offsets or similar hard coded byte wise locations to data, those no longer work.
In both cases you’d need a virtualization layer which would map logical offsets to physical ones. Definitely not impossible. Not as straightforward as replacing your read/write_blk method with read/write_encrypted_blk though.
As for why it’s awesome, it greatly simplifies and strengthens the real world process of encrypting or decrypting data where the size of the input and output are not fixed by some hardware constraint or fixed constant, where you have a virtualization layer, or where you don’t need to care as much (or can remap) offsets. Which is often.
That's because you are not aware of the importance of authentication.
Without authentication, your system is not secure: an attacker might intercept messages, and modify them undetected. The key word here is "ciphertext malleability". And once they can do that, they can cause the recipient to react in ways it should not, and in some cases the recipient might even leak secrets.
Sometimes (like disk encryption) the size overhead is really really really inconvenient, and the risk of interception is lower, so you break the rule and skip it anyway. But unless you are in a similar situation (you probably aren't), you must use authentication. It's only professional.
In practice, that means you should use authenticated encryption. Authenticated encryption is used everywhere, including HTTPS. And yes, it has a small size overhead. Usually 16 bytes per message, like AES-GCM and RFC 8439 (ChaPoly). Per message. Not per block. So the actual overhead is very low in practice. And again, it's the price you have to pay to get a secure system.
Use authenticated encryption.
Accept the overhead like everyone else.
Resistance is futile.
You do not seem to be aware of the practical constraints around an actual attack like ciphertext malleability in this context, or have thought through how you would implement direct disk encryption on a block device with AES-GCM without, you know, doing block based AES-GCM for individual blocks?
Which is exactly what I was referring to?
For block based, the best way is simply to use a validating filesystem like ZFS on top of whatever block based crypto is being used, if you need random IO. If you don't, a simple fixed size signature (separate from the data) is sufficient, and out of band is fine.
In either case, including AES-GCM, the validation and authentication is not, itself, the symmetric encryption algorithm. They wrap approved block ciphers which do that.
As per the Standard, anyway.
I'm not against AES-GCM, not at all. It's awesome! I'm pointing out that it has implementation tradeoffs.
Show me a peer reviewed paper demonstrating, or at least convincingly arguing, of the soundness of a particular technique you are trying to advocate, and I’ll believe you.
Otherwise it’s pretty simple: either your authentication method has been validated (as are HMAC and polynomial hashes), or there’s a good chance it’s broken even if you don’t know it yet.
> For block based, the best way is simple to use a validating filesystem like ZFS on top of whatever block based crypto is being used,
File system blocks are typically 4KiB or so. AES blocks are 16 bytes. I’m not sure what you mean by "block based crypto" here, the length of AES blocks has nothing to do with the file system blocks you’re trying to encrypt.
> In either case, including AES-GCM, the validation and authentication is not, itself, the symmetric encryption algorithm. They wrap approved block ciphers which do that.
I have implemented a cryptographic library, so I’m well aware. I insist on authenticated encryption as if it was a monolithic block because it makes much safer APIs. You really really don’t want to let your average time-pressured programmer implement their polynomial-hash-based authentication protocol by hand; there are too many footguns to watch out for. Believe me, I’ve walked that minefield, and blew my leg off once.
> I'm not against AES-GCM, not at all. It's awesome! I'm pointing out that it has implementation tradeoffs.
Compared to what? All the examples you cite in your other comment (SSL to PGP/GPG, S/MIME) make the exact same trade-offs!! They all add an authentication tag to each message, effectively expanding its size.
The fact that XTS isn't authenticated is a huge problem with full-disk encryption.
And any decent structural validation of the data still makes it reasonably secure even without per-block validation.
Without the correct key for AES, it is exceedingly difficult to construct a value that can result in a successful attack after decryption even for the simplest file systems (as compared to a very visible crash or disk corruption issue even without validation), and that blog post way oversimplifies the actual process. It also makes numerous flat out false statements about many encryption modes.
a trivial answer that solves every one of the attacks mentioned in that blog is using ZFS on top of an encrypted block device.
In each of these cases, for a successful attack, you’d need to generate a new block, or identify an existing block to replace a known block with, that would produce the attackers desired outcome. All GCM does is make it more detectable in the encrypted data if that happens.
With some of the modes mentioned, watching the actual disk activity and doing chosen plaintext attacks could make it possible to shorten the time to recover the underlying volume keys, but GCM doesn't (necessarily) help immensely with that.
It is going to be obvious in the system itself without the right key if someone tries to swap in a bogus block, because it will be gibberish/corrupt, if it is data used by anything or checked by anything.
AES-GCM just means you can tell something is damaged the moment you pick it up, rather than only once you look at it. And it does that at the cost of adding a signature to everything. Sometimes that’s worth it, sometimes it’s not.
First, name one example.
Second, what do you mean by "individual blocks"?
AES-GCM adds one authentication tag per message. A single message may contain millions of AES blocks, and the total overhead of AES-GCM over it will still be a single authentication tag (16 bytes). That makes it very similar to pretty much any authenticated encryption scheme out there.
I was specifically referring to the context of things like block devices. There is no single message (in a sane way, anyway) for the device. Each low level block is the message, in the sense you are referring to. That's when inputsize != outputsize is a problem, as that 'message' is also fixed size.
When I am referring to authenticating in a way that doesn't make individual blocks bigger, I'm referring to a HMAC signature in filesystem metadata or similar in this type of scenario. Out of band information. Practically speaking, even a basic CRC of metadata and file contents would make most attacks impractical.
Which you could do with AES-GCM of course, by storing the tag separately. I currently know of no implementations that do so however, but I'm sure there are ones out there. It would require storing the tag per block, which doesn't sound fun or performant.
To answer your second question in that context - everything from SSL to PGP/GPG, S/MIME, etc.
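For what it's worth, the "store the tag separately" variant is easy to sketch with the JCE (hypothetical helper names; the JDK appends the 128-bit tag to its combined output, so detaching it is just an array split, this is not a real disk-encryption design):

```java
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.util.Arrays;

public class DetachedGcm {
    static final int TAG_BYTES = 16;

    // Encrypt a fixed-size block; return {ciphertext, tag} for separate storage.
    // The ciphertext is exactly the size of the input block.
    static byte[][] encryptDetached(SecretKey key, byte[] nonce, byte[] block)
            throws Exception {
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(TAG_BYTES * 8, nonce));
        byte[] out = c.doFinal(block);
        byte[] ct  = Arrays.copyOfRange(out, 0, out.length - TAG_BYTES);
        byte[] tag = Arrays.copyOfRange(out, out.length - TAG_BYTES, out.length);
        return new byte[][] { ct, tag };
    }

    // Reattach the out-of-band tag to verify and decrypt.
    static byte[] decryptDetached(SecretKey key, byte[] nonce, byte[] ct, byte[] tag)
            throws Exception {
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(TAG_BYTES * 8, nonce));
        c.update(ct);
        return c.doFinal(tag); // throws AEADBadTagException on mismatch
    }
}
```

libsodium exposes the same idea directly as its "detached" AEAD API.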
If I recall correctly, CRC-of-plaintext-then-encrypt schemes have been defeated in the past. With practical attacks.
> Which you could do with AES-GCM of course, by storing the tag separately. I currently know of no implementations that do so however, but I'm sure there are ones out there. It would require storing the tag per block, which doesn't sound fun or performant.
Here’s an example from possibly the most famous modern cryptographic library: https://doc.libsodium.org/secret-key_cryptography/aead/aes-2...
As for storing the tag "per block", I’m not sure what you mean. Sure, you need one tag per block, but with the above API you can store that tag anywhere you want. If for instance you pack them into dedicated blocks, a single 4KiB block can store 256 authentication tags. The loss of storage capacity would be a whopping 0.4%.
> When I am referring to authenticating in a way that doesn't make individual blocks bigger, I'm referring to a HMAC signature in filesystem metadata or similar in this type of scenario. Out of band information
Then just store the authentication tag from AES-GCM out of band!! Surely your meta-data can handle a 0.4% size overhead?
> To answer your second question in that context - everything from SSL to PGP/GPG, S/MIME, etc.
Thought so. They’re all just like AES-GCM. One of them (TLS 1.3, a.k.a. SSL) can even use AES-GCM for its symmetric crypto.
I tried to read your pointer, but the link goes nowhere explaining it. Mind giving a more useful link? It could be because I’m on mobile.
We weren’t talking about CRC of plaintext anyway - we were talking about block encryption. So it would be CRC (as validation) of on-disk filesystem structures as part of parsing. Aka an actual attack.
Standard AES-GCM appends the tag to the encrypted message directly. None of those I name do it that way. Using AES-GCM as a transport is layering their stuff inside it, which of course is fine as I’m describing it - because they don’t have fixed size structures in their protocols! It doesn’t mean they aren’t doing the additional validation and authentication.
That is a shitty problem to have, there is no perfect solution. If you at all can, change the problem. If that means you need a virtualization layer, use it if possible.
> I tried to read your pointer, but the link goes nowhere explaining it. Mind giving a more useful link? It could be because I’m on mobile.
The first sentence of the link I gave you reads as follows: "Some applications may need to store the authentication tag and the encrypted message at different locations."
Then it shows you the following function that achieves that separation (with zero performance overhead I might add):
int crypto_aead_aes256gcm_encrypt_detached(
    unsigned char *ciphertext,
    unsigned char *mac,
    unsigned long long *mac_size_p,
    const unsigned char *message,
    unsigned long long message_size,
    const unsigned char *additional_data,
    unsigned long long additional_data_size,
    const unsigned char *always_NULL,
    const unsigned char *nonce,
    const unsigned char *key);
Which brings me back to: all secure encryption expands the size of the ciphertext. If you're using XTS in a new design, you are doing something very wrong.
of course it would be more secure to have private physical key exchange, but that's not a practical option, so we rely on RSA or whatever
I wouldn't bet on the TLS session you're using to have that kind of half life.
The other side is the protocol itself. Protocols are delicate, and easy to mess up in catastrophic ways. On the other hand, they're also provable. We can devise security reductions that prove that the only way to break the protocol is to break one of its primitives. Such proofs are even mechanically verified with tools like ProVerif and Tamarin.
Maybe TLS is a tad too complex to have the same half life as AES. The Noise protocols however have much less room for simplification. That simplicity makes them rock solid.
"Immediately ditch RSA in favor of EC, for it is too hard to implement safely!"
This specific one was introduced with the rewriting of these parts of the code from C++ to Java, and that happened with Java 15.
Seems like someone likes to live dangerously: using libraries that haven't been updated since 2012 is a pretty risky move, especially given that if an RCE is discovered now, you'll find yourself without too many options to address it, short of migrating over to the new release (which will be worse than having to patch a single dependency in a backwards compatible manner): https://logging.apache.org/log4j/1.2/changes-report.html
Admittedly, i wrote a blog post called "Never update anything" a while back, even if in a slightly absurdist manner: https://blog.kronis.dev/articles/never-update-anything and personally think that frequent updates are a pain to deal with, but personally i'd only advocate for using stable/infrequently updated pieces of software if they're still supported in one way or another.
You do bring up a nice point about the recent influx of vulnerabilities and problems in the Java ecosystem, which i believe is created by the fact that they're moving ahead at a faster speed and are attempting to introduce new language features to stay relevant and make the language more inviting for more developers.
That said, with how many GitHub outages there have been in the past year and how many other pieces of software/services have broken in a variety of ways, i feel like chasing after a more rapid pace of changes and breaking things in the process is an industry wide problem.
I disagree. Some libraries are just rock solid, well tested and long life.
In the case of log4j 1.x vs 2.x, has there been any real motivator to upgrade? There are 2 well known documented vulnerabilities in 1.x that only apply if you use extensions.
Here's a bit more information about some of the vulnerabilities in 1.x, someone did a nice writeup about it: https://www.petefreitag.com/item/926.cfm
I've also dealt with 1.x having some issues with data loss, for example, https://logging.apache.org/log4j/1.2/apidocs/org/apache/log4... which is unlikely to get fixed:
DailyRollingFileAppender has been observed to exhibit synchronization issues and data loss.
But at the end of the day none of it really matters: those who don't want to upgrade won't do so, potential issues down the road (or even current ones that they're not aware of) be damned. Similarly, others might have unpatched versions of 2.x running somewhere which somehow haven't been targeted by automated attacks (yet) and might continue to do so while there isn't proper motivation to upgrade, or won't do so until it will be too late.
Personally, i dislike the idea of using abandoned software for the most part, when i just want to get things done - i don't have the time to dance around old documentation, dead links, having to figure out workarounds for CVEs versus just using the latest (stable) versions and letting someone else worry about it all down the road. Why take on an additional liability, when most modern tooling and framework integrations (e.g. Spring Boot) will be built around the new stuff anyways? Though thankfully in regards to this particular case slf4j gives you more flexibility, but in general i'd prefer to use supported versions of software.
I say that as someone who actually migrated a bunch of old monolithic Spring (not Boot) apps to something more modern when the versions had been EOL for a few years and there were over a hundred CVEs as indicated by automated dependency/package scanning. It took months to do, because previously nobody actually cared to constantly follow the new releases and thus it was more akin to a rewrite rather than an update - absolute pain, especially that JDK 8 to 11 migration was also tacked on, as was containerizing the app due to environmental inconsistencies growing throughout the years to the point where the app would roll over and die and nobody had any idea why (ahh, the joys of working with monoliths, where even logs, JMX and heap dumps don't help you).
Of course, after untangling that mess, i'd like to suggest that you should not only constantly update packages (think every week, alongside releases; you should also release often) but also keep the surface area of any individual service small enough that they can be easily replaced/rewritten. Anyways, i'm going off on a tangent here about the greater implications of using EOL stuff long term, but those are my opinions and i simultaneously do admit that there are exceptions to that approach and circumstances vary, of course.
Luckily, there's now an alternative: reload4j (https://reload4j.qos.ch/) is a maintained fork of log4j 1.x, so if you were one of the many who stayed on the older log4j 1.x (and there were enough of them that there was sufficient demand for that fork to be created), you can just migrate to that fork (which is AFAIK fully backward compatible).
(And if you do want to migrate away from log4j 1.x, you don't need to migrate to log4j 2.x; you could also migrate to something else like logback.)
This one, I gather, is actually Java's fault.
It sounds like three unrelated security bugs from totally different teams of developers.
Modules are also part of the reason why so many folks got "stuck" on java 8.
It is definitely an interesting study in the challenges of trying to make advances in a platform when a lot of the ecosystem is very much in maintenance mode and may not have a lot of eyes on the combination of existing libraries vs new versions of Java.
At some level, as long as releases add functionality, the basic rules of systemantics will guarantee unintended interactions.