I say this as a Java developer who thinks Java is on the whole pretty decent: Java is way behind browser vendors when it comes to adding new ciphers and dropping broken ciphers.
Between the infrequent releases, the slavish devotion to maintaining backwards compatibility, the sluggish release distribution, and the generally conservative nature of some Java shops when it comes to upgrading, it's just not reasonable to expect the average non-Java-using organisation to wait for Java before dropping broken crypto.
If you're running the latest LTS release of Ubuntu you'll get Firefox 37.0.2, released May 2015. But you'll still get Java 7, despite the fact that Java 8 was released in March 2014, giving you cutting-edge TLS 1.0 and CBC ciphers. At least Java 7 looks good compared to Java 6, which has no SNI support, a stack of weak ciphers, and support for only the most obsolete cipher suites. And I know some people who are still running Java 6 in production systems.
If you wait for every last Java 6 holdout before deploying a secure configuration, you'll be waiting forever. Leave us behind, we'll only slow you down.
I'd say best practice for Java shops should be to move SSL termination to a proxy in front of the app server. This doesn't work if you're doing mutual TLS to authenticate users in your Java stack, and I'm sure there are a bunch of other use cases where you can't, and it would of course be ideal if Java could keep up with security. But given the state of the Java world, just drop in a TLS termination proxy wherever possible (nginx works great) and forget about doing it in Java.
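For the curious, the pattern is just a minimal sketch like the following nginx server block (hostname, paths, and the backend port are placeholders, and the cipher list is illustrative, not a recommendation): nginx handles the TLS handshake with modern ciphers, and the Java app server only ever sees plain HTTP on localhost.

```nginx
server {
    listen 443 ssl;
    server_name example.com;  # placeholder hostname

    ssl_certificate     /etc/nginx/certs/example.com.pem;  # placeholder paths
    ssl_certificate_key /etc/nginx/certs/example.com.key;
    ssl_protocols       TLSv1.2;
    ssl_ciphers         ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384;

    location / {
        # Forward decrypted traffic to the Java app server listening locally.
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

The nice side effect is that your cipher configuration now follows nginx/OpenSSL release cadence instead of the JDK's.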
What people are really saying here is that the native APIs provided by the standard Java packages are behind. Java, as a platform, has no inherent limitation like this. Feel free to check out the excellent work of the team at Bouncy Castle. They offer full ECDHE support in a really well-maintained library. Sure, Oracle should build this in natively (and I admit that it does look bad when you see it on SSL Labs), but there are other options.
With both AES-GCM and ChaCha20-Poly1305, confidentiality is provided by XORing the plaintext with a keystream generated by either AES or ChaCha20. If the nonce is the same, then the same keystream is used.
Consider two plaintexts, p₁ and p₂, encrypted with the same (key, nonce) pair. The ciphertexts will, in part, contain p₁⊕k and p₂⊕k, where k is the keystream and ⊕ is XOR.
An attacker can XOR those ciphertexts together and get p₁⊕k⊕p₂⊕k = p₁⊕p₂⊕k⊕k = p₁⊕p₂. If the attacker has any knowledge of p₁ or p₂ then the confidentiality of the other falls as well.
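This cancellation is easy to see in a toy Python sketch. Here the keystream is simulated with random bytes rather than actually running AES-CTR or ChaCha20, since only the XOR structure matters for the attack:

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

p1 = b"attack at dawn!!"
p2 = b"retreat at nine!"

# One keystream used for both messages: this is what a repeated
# (key, nonce) pair gives you in AES-GCM or ChaCha20-Poly1305.
k = os.urandom(len(p1))
c1 = xor(p1, k)  # what the attacker observes
c2 = xor(p2, k)

# The attacker never sees k, but XORing the ciphertexts cancels it:
# c1 XOR c2 == p1 XOR p2.
assert xor(c1, c2) == xor(p1, p2)

# With knowledge of p1, the attacker recovers p2 outright.
assert xor(xor(c1, c2), p1) == p2
```

Even without knowing either plaintext, `p1 XOR p2` leaks a great deal: classic crib-dragging techniques recover both messages from it when they are natural language.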
The failure of the authenticator is more complex. Both AES-GCM and ChaCha20-Poly1305 use polynomial authenticators and, in short, duplicating a (key, nonce) pair allows the attacker to solve an equation for the authentication key, after which they can forge messages. That's very bad.
Any authentication tag is an internal detail of the AEAD. As a practical matter, in order to provide authentication, the AEAD must expand the plaintext and, in some AEADs, that expansion comes in the form of a tag. But some AEADs just pad the plaintext with zeros and encrypt with a wide-block cipher (e.g. AEZ), in which case there's no tag as such.
Either way, it's an internal detail of the AEAD that someone using it doesn't need to know about. The AEAD just needs to signal an error at decryption time if the ciphertext has been manipulated.
The associated data⁺ is just an input that needs to be equal at encryption and decryption time. It can be empty, or it could be a counter, but it could also be some other form of context, e.g. the string “payload for attachment #3 of message #234982374”. It's there to make sure that ciphertexts are understood in the correct context, but it's not included in the ciphertext itself.
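The contract can be sketched in Python. The standard library has no real AEAD, so this is a toy encrypt-then-MAC construction (the "encryption" is an illustrative hash-derived keystream, not a real cipher); the point is only that the tag covers the associated data without the AD ever appearing in the output:

```python
import hashlib
import hmac
import os

TAG_LEN = 32  # SHA-256 output size

def seal(key: bytes, nonce: bytes, plaintext: bytes, ad: bytes) -> bytes:
    # Toy "encryption": XOR with a key/nonce-derived keystream. Illustration only.
    stream = hashlib.sha256(key + nonce).digest()[: len(plaintext)]
    ct = bytes(p ^ s for p, s in zip(plaintext, stream))
    # The tag binds the AD, but the AD itself is NOT part of the output.
    tag = hmac.new(key, ad + nonce + ct, hashlib.sha256).digest()
    return ct + tag

def open_(key: bytes, nonce: bytes, sealed: bytes, ad: bytes) -> bytes:
    ct, tag = sealed[:-TAG_LEN], sealed[-TAG_LEN:]
    want = hmac.new(key, ad + nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, want):
        # The AEAD just signals an error; the caller learns nothing more.
        raise ValueError("authentication failed")
    stream = hashlib.sha256(key + nonce).digest()[: len(ct)]
    return bytes(c ^ s for c, s in zip(ct, stream))

key, nonce = os.urandom(32), os.urandom(12)
ad = b"payload for attachment #3 of message #234982374"
sealed = seal(key, nonce, b"hello", ad)

# Same AD at both ends: decryption succeeds.
assert open_(key, nonce, sealed, ad) == b"hello"

# Different AD: decryption fails, even though the ciphertext is untouched.
try:
    open_(key, nonce, sealed, b"payload for attachment #4 of message #234982374")
    assert False, "should have failed"
except ValueError:
    pass
```

Real AEADs (AES-GCM, ChaCha20-Poly1305) provide exactly this interface, just with proper ciphers and authenticators underneath.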
The Java docs (and Java's crypto APIs are terrible in general, I'm afraid) call the AD both “associated data” and “additional authentication data”. That's just a mistake. At the very least they should be internally consistent, and I think they should pick “associated data” as the term to use.
(⁺) I called it “additional” data in my post, but since the RFC calls it “associated” I changed it to that. It's the same thing.
CRLSet 2140 contains a public-key block for this. If you're on desktop Chrome you can go to chrome://components and check for a CRLSet update manually. If you're testing it, note that Chrome caches certificate validity results for a while so you might need to restart Chrome to see the effect.
ECC private keys are just random numbers. The reported issue is that, if the random number happens to be encodable in fewer bytes than expected, the spec says that it should be padded with leading zeros, but OpenSSL doesn't do that.
For example, if you generate 32-bit random numbers, you expect a few to be only three bytes long (and even a few to be two or one). The difference is whether you write 0x123456 or 0x00123456. There's no security impact. At worst, an OpenSSL-generated ECC key might be rejected by other code.
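The encoding difference can be sketched in a few lines of Python (using a 4-byte field for illustration, rather than a real curve's 32-byte field):

```python
n = 0x123456  # a "short" private scalar: only three significant bytes

# Minimal big-endian encoding, with the leading zero byte dropped.
minimal = n.to_bytes((n.bit_length() + 7) // 8, "big")
assert minimal == b"\x12\x34\x56"

# Fixed-width encoding, padded with leading zeros to the field size.
padded = n.to_bytes(4, "big")
assert padded == b"\x00\x12\x34\x56"

# Both decode to the same integer, so there is no security impact --
# only an interoperability one.
assert int.from_bytes(minimal, "big") == int.from_bytes(padded, "big") == n
```

A strict decoder that insists on exactly four bytes would reject the minimal form, which is the "rejected by other code" failure mode described above.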
Since OpenSSL has been doing this forever (based on the report), in practice this means that we should update the spec :)
At worst, an OpenSSL-generated ECC key might be rejected by other code.
Private keys are almost certainly kept private so the amount of software that handles them is relatively limited (I'd bet that the majority of the time it's OpenSSL), but how about public ECC keys? As I understand it, they could be embedded in certificates and signed, in which case a signature verifier that uses the "correct" encoding might fail. However, AFAIK almost all SSL certificates out there use RSA and ECC is pretty rare, so this problem has little impact.
Ha, you're the guy behind Pond (hi!). As a security researcher, how does it feel to work for a company that (reportedly) (pro)actively collaborates with the NSA? Are you ever worried that the company might not be as ethical as it seems to the average Googler?
puts on tinfoil hat
edit: Thank you for the downvote[s]!
edit2: I just remembered a relevant example. Reading "How Google Works", much of their vision about smart creatives and how to run a company properly resonated with me. But then I immediately realised that it's written by the same a-hole involved in the massive Google-Apple wage-fixing scandal, and it made me question how much of what's in there is real.
You'll quickly get a "feel" for which queries are better answered by Google than by DDG, and it happens less often than you'd expect. That's not because DDG's own results are so incredible, but because once you get the hang of the !bang operators, you'll find yourself searching directly on the very sites you wished/expected your top Google hits to come from in the first place (!w Wikipedia, !so Stack Overflow, !r Reddit, !snopes, !gi !yi !bi image search engines, !map Google Maps, !yt YouTube, !wnl !wde !wxx Wikipedia for country code xx, to name just a few I use all the time).
Unfortunately, many CAs decided to ignore it, presumably on the assumption that Microsoft would be forced to back down. We've done this dance with MD5 and 1024-bit certificates and we know how it goes. Here's a quick list of CAs that issued more than 2000 certificates extending into 2017 with SHA-1:
We would all have liked CAs to have acted either when the Baseline was updated (2011), or when Microsoft laid down dates (Nov 2013), or when Chrome talked about doing this at the CA/B Forum meeting earlier this year. It is unfortunate that the 2016/2017 dates are being ignored.
If you run a site and want to be insulated from this sort of thing, you might want to consider getting one-year certificates. CAs like to sell multiple years, of course, but doing renewal only once every three (or more) years means you run a significant risk of losing the institutional knowledge of how to do it. (E.g. the renewal reminder email goes to someone who left last year and you have a panic when the certificate expires.) Additionally, very long-lived certificates are not insulated from these sorts of changes and you may need to replace them during their lifetime anyway.
The claim that CAs have been sitting on SHA-1 and not migrating to SHA-2 is not entirely accurate, at least in my experience with DigiCert.
Consequently, people I know there have told me that 25% of all SHA-2 certs expiring in 2017 have been issued by DigiCert, well beyond their market share. DigiCert has migrated all but a couple hundred customer certificates expiring in 2017 onto SHA-2, and those should be moved soon.
As for CAs in general, much of the blame lies not with CAs but with the lack of SHA-2 compatibility in certain devices and software.
For its part, today, DigiCert released a new, free tool that makes it easy for sys admins to identify all SHA-1 certs in their networks, determine validity periods and how future Chrome releases will treat these certs, and help admins map out a path toward SHA-1 sunsetting and SHA-2 migration.
DigiCert will also replace any SHA-1 certs – for current customers and non-customers alike – for free. They will match the existing SHA-1 term for a free upgrade to SHA-2 through the end of the licensing period. Here’s a link from a Dark Reading article:
I wonder how many of those certs from GlobalSign, GoDaddy, GeoTrust, etc. are 4- and 5-year certs purchased prior to any announcement? As you noted, CAs like to push multi-year certs.
While you can usually reissue/re-key your cert free of charge with CAs, a lot of companies are probably hesitant to make sudden moves to SHA-2 when there are compatibility concerns. Many on legacy systems like Server 2003 cannot update to SHA-2. As I mentioned in another comment, the hotfixes only bring Server 2003 SHA-2 support up to the same level as XP SP3 (only compatible as a client, not as a server).
Also, Microsoft's fastest-approaching SHA-2 deadline is January 2016 for code signing, yet Windows Vista and 7 don't support SHA-2 signatures on kernel drivers. Not sure if that's been patched yet, but it would seem Microsoft wasn't fully prepared to support its own policies at the time of its announcement either.