
Downgrade Attack on TLS 1.3 and Vulnerabilities in Major TLS Libraries - pentestercrab
https://www.nccgroup.trust/us/about-us/newsroom-and-events/blog/2019/february/downgrade-attack-on-tls-1.3-and-vulnerabilities-in-major-tls-libraries/
======
axaxs
Just commenting since I saw and recognized the name next to BearSSL... Thomas
Pornin is an absolute treasure, and anyone interested in entry-level crypto
and beyond should read through his StackOverflow responses. Many of the
answers distill complex topics into more digestible pieces.

~~~
Flowdalic
Link to his stackoverflow answers sorted by votes:
[https://stackoverflow.com/users/254279/thomas-pornin?tab=ans...](https://stackoverflow.com/users/254279/thomas-pornin?tab=answers&sort=votes)

~~~
esnard
Thomas Pornin actually posted a lot of answers on multiple sites on the
StackExchange network, under two different accounts:

[https://stackexchange.com/users/92852/thomas-pornin](https://stackexchange.com/users/92852/thomas-pornin)

[https://stackexchange.com/users/969353/tom-leek](https://stackexchange.com/users/969353/tom-leek)

~~~
axaxs
Thanks for this, StackExchange was actually what I was thinking of when I
wrote StackOverflow. Luckily, he posts there a lot as well. I think my
favorite answer is the one below, as it's the first I can recall that gives a
simplified account of how TLS works while retaining technical detail. A gift
for sure...

[https://security.stackexchange.com/questions/20803/how-does-...](https://security.stackexchange.com/questions/20803/how-does-ssl-tls-work/20847#20847)

------
pedrocr
With the number of TLS vulnerabilities, I don't really understand why we're
not just replacing it completely. From what I've read, a lot of the issue is
the complexity of the standard itself, and we could do much better now that so
much more is known about good crypto practice. Google has already pushed HTTP
2 and now 3, thanks to having very sizeable chunks of both the browsers and
the sites. Why not also have a much better designed crypto standard in one of
those efforts?

~~~
tialaramex
Sure, this is the standard anti-agility argument. "Oh, we understand
yesterday's mistakes now, so, just throw away everything built before today
and start fresh, then there will be no mistakes". It's like CADT but with
cryptography.

If you have the luxury of greenfield development, you are welcome to try this.
There's a pretty good chance you'll screw up badly, but regardless, when
tomorrow more opportunities for mistakes are discovered you'll be vulnerable
and won't have a greenfield any more. You get to learn basically the same
lessons about agility everybody else did, the same way everybody else learned
them. Brilliant.

The Web is not a greenfield development. Google may have "pushed HTTP 2" but
you can still connect to their sites using HTTP 1.1 because _of course you
can_. So that first step, where you throw everything that already exists away,
is immediately the end of your whole strategy for the Web or more or less any
public Internet service.

You might be thinking, "OK, old crap stuff would be affected, but my new shiny
things would be fine". And you're almost right. But you have to really operate
a scorched earth policy, the new shiny things _must not_ interoperate with the
old crap at all. And that's a deal breaker in practice on the Internet. If you
say "Well, if we can't do shiny I guess we'll do the old thing" then you lose
immediately, that's the thrust of their TLS 1.3 example, both client and
server want to talk TLS 1.3 which isn't vulnerable - but the attacker abuses
the fact that they're willing to talk TLS 1.2 instead.

If you don't want to do RSA kex in your own system where you control all
servers and clients, don't do RSA kex. I commend this, it's good sense. You
can use the exact OpenSSL version described as vulnerable in this article,
switch off RSA key exchange entirely at both ends and the vulnerability
vanishes. But alas even "almighty" Google does not control all servers and
clients on the Web.

~~~
pedrocr
_> It's like CADT but with cryptography._

TLS is 25 years old now. CADT is not a fair criticism when more than the
lifetime of a whole teenager has elapsed. At some point building a second
system is worth it.

 _> You get to learn basically the same lessons about agility everybody else
did, the same way everybody else learned them. Brilliant._

We would also get to throw away a bunch of stuff we already know was a very
bad idea. There has to be a limit somewhere to that tradeoff. At some point
starting fresh allows you to throw away enough crap that you come out ahead.

 _> If you say "Well, if we can't do shiny I guess we'll do the old thing"
then you lose immediately, that's the thrust of their TLS 1.3 example, both
client and server want to talk TLS 1.3 which isn't vulnerable - but the
attacker abuses the fact that they're willing to talk TLS 1.2 instead._

There is no difference there between changing from 1.2 to 1.3 and from 1.2 to
ShinyNewStuff. My point is that from what I've read we've learned enough that
we could design something better than TLS if we started fresh, so we could do
TLS1.2->ShinyNewStuff instead of TLS1.2->TLS1.3 and get some advantages. The
downgrade attacks are present in both cases until you eventually discontinue
1.2.

 _> If you don't want to do RSA kex in your own system where you control all
servers and clients, don't do RSA kex._

One of the things we've learned over the last 25 years is that having
optionally insecure ways to use security standards is a very bad idea. So the
fact that configuring TLS securely is merely feasible, rather than the only
option, is part of the problem, not the solution. That's the kind of thing we
could potentially fix.

~~~
tialaramex
> TLS is 25 years old now.

The on-the-wire protocol has some faint resemblance, but the actual
technologies behind SSL 2.0 and TLS 1.3 are utterly different.

The original Bell phone system where you have to talk to an operator and say
"Give me 4235 please", and a modern iPhone, are also utterly different - but
every step along the way was achieved by backwards compatibility and that
meant some compromises. So the iPhone still has "phone numbers" even though
you probably rarely use them for anything.

That's all that was going on in TLS. The "start fresh" you were originally
asking for involves _throwing away_ that compatibility to get "some
advantages". I explained why TLS doesn't do that. The feeling that surely
starting over would help is _exactly_ CADT even if you're not a teenager.

You absolutely can have those "some advantages" if you don't want backwards
compatibility. But it's not obvious why you'd rewrite TLS rather than just use
TLS in this case and refuse to downgrade below TLS 1.3. What happens when TLS
1.4 has even more "advantages" ?

~~~
pedrocr
_> but every step along the way was achieved by backwards compatibility and
that meant some compromises_

This is not a good analogy. The phone system has thrown away plenty of
standards as well. Throwing away TLS for something else doesn't have any
compatibility problems on the web. There is no difference in compatibility
between doing 1.2->ShinyNew and doing 1.2->1.3.

Maybe we can do enough within just TLS versions to fix the flaws. Maybe doing
1.4 with a much stricter set of conditions is possible, and so we should keep
the base. But for some reason that keeps not happening, and insecure options
like these still exist. Either way, the compatibility argument doesn't hold.
The end user would not notice anything.

~~~
tialaramex
> Throwing away TLS for something else doesn't have any compatibility problems
> on the web

No. Even just small changes to TLS resulted in massive compatibility mishaps.
There was about a year delay in the TLS 1.3 process while they worked around
things like this.

> But for some reason that keeps not happening and insecure options like these
> still exist.

Unfortunately you don't even understand enough about this topic to have
noticed that the article is specifically mentioning a downgrade because the
"insecure options like these" do not exist in TLS 1.3.

Insisting that problems you don't understand will go away if we just rewrite
everything again from scratch is even closer to CADT than the anti-agility
enthusiasts.

~~~
pedrocr
_> Even just small changes to TLS resulted in massive compatibility mishaps.
There was about a year delay in the TLS 1.3 process while they worked around
things like this._

Those are part of the problem we should be solving. Delaying TLS 1.3 to enable
middleboxes is precisely the kind of thing we should be throwing away from
any standard. That TLS 1.3 stayed close enough to 1.2 for broken middleboxes
to show how broken they are is an example of how backwards compatibility at
all costs bites you, and yet you are using it to claim the opposite.

It's perfectly possible that all that needs to be done for TLS to stop being a
continuous problem is a sane TLS 1.4. But I'd like to see an actual argument
for what that would look like instead of you just insulting me without adding
anything to the discussion.

~~~
tialaramex
> Those are part of the problem we should be solving. Delaying TLS 1.3 to
> enable middleboxes is precisely the kind of thing we should be throwing
> away from any standard.

Your "solution" of just breaking things while declaring this "doesn't have any
compatibility problems" is not a solution people are going to accept. It
genuinely doesn't matter that you think it'd be a great idea, not the tiniest
bit, since it would see no adoption.

TLS 1.3 did not take this "backwards compatibility at all cost" approach you
describe. On the contrary, it was engineered very carefully to work _around_
middleboxes. The protocol is untidy as a result, with extraneous compatibility
fields, but the cryptography remains as intended. But you've not addressed
that at all, since your entire basis is "From what I've read, a lot of the
issue is the complexity of the standard itself".

Again, this issue you're reading about today is NOT a problem with TLS 1.3,
and would not have been fixed by any changes to TLS 1.3 or imaginary
alternatives to TLS 1.3 _unless_ as a side effect they prohibited falling back
to earlier versions, something you can already choose to do if that's what you
want (you do not).

~~~
pedrocr
_> It genuinely doesn't matter that you think it'd be a great idea, not the
tiniest bit, since it would see no adoption._

The discussion around the TLS 1.3 delays included plenty of people who also
thought middleboxes should not be enabled. Whether a standard that broke
compatibility with them completely would be adopted is an open question.

 _> TLS 1.3 did not take this "backwards compatibility at all cost" approach
you describe. On the contrary, it was engineered very carefully to work
_around_ middleboxes. The protocol is untidy as a result, with extraneous
compatibility fields, but the cryptography remains as intended._

The second part contradicts the first. The standard had to be made more
complex to allow for the backwards compatibility, and that's a future
liability. We've had other holes in TLS from exactly these kinds of completely
unneeded extra complexities.

 _> Again, this issue you're reading about today is NOT a problem with TLS
1.3_

Again, that's not my point at all. I'm not discussing today's issue in
particular. I'm referring to the fact that we keep finding bugs in TLS
consistently. I was wondering if we were at the point where a clean slate
would reduce this. I've made that point, someone else has pointed to an
experiment in doing just that, and maybe someday we'll get a full test. Your
dismissal of the actual question as if it's invalid is just being unpleasant
without being helpful.

------
Someone1234
> The cat is not dead yet, with two lives remaining thanks to BearSSL
> (developed by my colleague Thomas Pornin) and Google's BoringSSL.

Some kind of award has to go to this sentence; it has to be the most
convoluted way to simply say "aren't vulnerable."

In context you can only just barely follow it, and it literally involves
counting the vulnerable and non-vulnerable libraries to check they all add up
to 9...

~~~
pizzazzaro
Yeah... What about LibreSSL?

~~~
george_perez
LibreSSL has no TLS 1.3 support yet.

------
inetknght
> _The last 20 years of attacks that have been re-discovering Bleichenbacher
> 's seminal work in 1998 clearly show that it is close to impossible to
> correctly implement the RSA PKCS#1 v1.5 encryption scheme. While our paper
> recommends a series of mitigations, it is time for RSA PKCS#1 v1.5 to be
> deprecated and replaced by more modern schemes like OAEP and ECIES for
> asymmetric encryption or Elliptic Curve Diffie-Hellman for key exchanges._

RSA PKCS#1 v1.5:
[https://tools.ietf.org/html/rfc2313](https://tools.ietf.org/html/rfc2313)

Title: _PKCS #1: RSA Encryption version 1.5_

tl;dr: deprecate RSA encryption as a whole?! Did I read this right?

~~~
throwawaymath
The consensus among cryptographers for quite a while now has been that RSA
should be avoided. Implementation vulnerabilities in RSA aren't surprising,
and it's a poor choice of algorithm for modern cryptosystems.

However, note that much of the problem with implementing RSA correctly is the
padding. The specific recommendation here is to only use RSA OAEP, and
preferably to abandon RSA altogether for more modern (elliptic curve)
constructions.

So no, they're not saying to deprecate RSA in its entirety (though I have high
confidence all of the authors would strongly suggest that to anyone who
asked). Rather, they're saying you should only use RSA with one very specific
form of padding, if you absolutely insist on using RSA in 2019 (and you
shouldn't unless you know you have to).
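
The padding point can be made concrete. Below is a minimal sketch of RSA-OAEP
encryption using the third-party pyca/cryptography package (my choice of
library, not one named in the thread); the `padding.OAEP` construction is the
"one very specific form of padding" being recommended over PKCS#1 v1.5:

```python
# Sketch: RSA encryption with OAEP padding instead of PKCS#1 v1.5,
# using the pyca/cryptography package (an assumption; the thread
# doesn't name a library). Key size and hash choices are illustrative.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# OAEP with SHA-256 for both the hash and the mask generation function.
oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

ciphertext = public_key.encrypt(b"session secret", oaep)
plaintext = private_key.decrypt(ciphertext, oaep)
assert plaintext == b"session secret"
```

OAEP's randomized, all-or-nothing padding is what removes the structured
error conditions that Bleichenbacher-style oracles exploit in v1.5.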

------
fulafel
There's a big disparity between the level of ambition in transport security
implementations and the big recent achievements in getting crypto more
widely deployed...

I think the current standard should be memory-safe implementations with proven
robustness against known classes of attacks, and optional resistance against
traffic analysis (at expense of wasted bandwidth).

------
hombre_fatal
Jesus. Imagine being enough of a genius to actually write timing-safe code.

~~~
AstralStorm
Why? You should be able to mathematically prove the lack of timing channels
over the whole negotiation... (for a specific CPU implementation, or a set of
them, at least). It's just that even encryption library authors are not mathy
enough, and it takes effort to model CPUs well enough.

PKCS#1 RSA could likely be proven broken by design...

~~~
hombre_fatal
> It's just that even encryption library authors are not mathy enough

I rest my case.

~~~
AstralStorm
They can implement an algorithm. That is not the same as writing tons of pages
of a machine-code-level automated theorem prover.

That said, someone has already written a library for timing proofs (including
cache and memory) in Isabelle/HOL, but it has to be combined with the
recompile prover from the seL4 project. That would take some time and work.

------
kccqzy
Why is using RSA for key exchanges acceptable? I thought we had dedicated key
exchange algorithms, Diffie–Hellman (DH), the ephemeral variant (DHE), and the
elliptic curve variant (ECDHE). So why RSA?

Also, what if I disable RSA in my browser and make sure the ClientHello
doesn't mention RSA? Will I be secure?

~~~
tialaramex
Your browser probably doesn't have an option to do this. If it did you'd find
out that certain sites just don't work if you forbid RSA key exchange.

The type of site that wants SSL Labs A+ scores works fine. But your bank
probably doesn't (they actively don't want ephemeral key exchange) and nor
does some crumbly older HTTPS site running a stitched together Apache 1.x on
an old Debian release.

To protect against this attack the server needs to refuse to try RSA key
exchange OR you need to refuse RSA altogether including the safe and extremely
popular authentication step.
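
Server-side, refusing RSA key exchange can be sketched with Python's stdlib
`ssl` module (an illustration of the policy, not what any particular site
runs; which suites actually exist depends on the local OpenSSL build):

```python
# Sketch: a server-side SSLContext that refuses static-RSA key exchange
# by allowing only ECDHE cipher suites for TLS 1.2. RSA can still be
# used for *authentication* (ECDHE-RSA-* suites), which is the safe and
# popular part; TLS 1.3 suites have no RSA key exchange at all.
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.set_ciphers("ECDHE+AESGCM:ECDHE+CHACHA20")  # no plain-RSA-kex suites

# Every remaining TLS 1.2 suite uses ECDHE key exchange; the TLS_* names
# are TLS 1.3 suites, which are always ephemeral.
names = [c["name"] for c in ctx.get_ciphers()]
assert all(n.startswith(("ECDHE", "TLS_")) for n in names)
```

This is exactly the "switch off RSA key exchange entirely at both ends"
mitigation mentioned earlier in the thread.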

~~~
_null_
Serious question: Why does the bank care about the TLS key exchange?

~~~
dharmab
Some businesses have to run WAF products for regulatory compliance, which is
typically implemented via TLS decryption at the WAF. There are ways to do TLS
decryption with ephemeral keys, but many orgs take the easy way of just
giving the WAF the RSA key.

------
wilsonthewhale
Is LibreSSL affected?

~~~
throwawaymath
Good question. The paper is here[1] but there is no mention of LibreSSL.

1: [https://eprint.iacr.org/2018/1173](https://eprint.iacr.org/2018/1173)

~~~
0xdeadb00f
Strange. LibreSSL is a fairly well-known implementation compared to something
like BearSSL (which I had not heard of until today). Does anyone have any
ideas on why LibreSSL was not mentioned?

~~~
loeg
It's not really independent. It's a major fork of OpenSSL, but at the end of
the day, it's a fork of OpenSSL.

------
devit
How about the obvious solution to all timing attacks: send the network
response when a timer set to 2^K milliseconds expires, rather than when the
data is ready? (With K adaptively incremented and decremented only rarely, and
outliers managed by restarting the timer, so that the total time is rounded up
to a multiple of 2^K.)

This way, the only timing signal available would be which requests take an
outlier amount of time, and I doubt that's enough to break anything unless you
can remotely cause the peer to hit slow disks or make network requests
depending on secret data (which is a far more explicit programming choice than
CPU timing differences).
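
The rounding scheme described above can be sketched as a pure function (the
name and the restart-as-ceiling interpretation are mine, for illustration):

```python
# Sketch of the proposed timing quantization: total response time is
# padded up to the next multiple of 2**k milliseconds, so the only
# signal left is whether a request was an outlier.
def padded_response_time_ms(elapsed_ms, k):
    """Round elapsed time up to the next multiple of 2**k ms.

    If the data wasn't ready when the timer fired, the timer restarts,
    which is equivalent to taking the ceiling in 2**k-ms intervals.
    """
    bucket = 2 ** k
    intervals = -(-elapsed_ms // bucket)  # ceiling division
    return max(intervals, 1) * bucket

assert padded_response_time_ms(3, 3) == 8    # fits in one 8 ms timer
assert padded_response_time_ms(9, 3) == 16   # outlier: timer restarted once
assert padded_response_time_ms(8, 3) == 8
```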

~~~
tialaramex
The article is about cache side channels. They are running on the same CPU as
the victim, typically in the Clown. Your "solution" isn't addressing the
problem this article is about.

------
badrabbit
Maybe a one-size-fits-all transport security protocol is a bad idea. Simpler
and smaller protocols with fewer features might provide better, more stable
security?

------
walrus01
Downgrading to TLS1.2 isn't the end of the world.

One of the things you can do to make a significant difference is configure all
of your httpd (apache2, nginx, whatever) to specifically disallow SSLv3,
TLS1.0 and TLS1.1.

There is no longer any relevant population of user agents that don't
understand TLS 1.2.
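
That policy is one line in most stacks. A sketch with Python's stdlib `ssl`
module (in nginx the equivalent would be an `ssl_protocols TLSv1.2 TLSv1.3;`
directive):

```python
# Sketch: enforcing a TLS 1.2 floor on a server context, the same
# policy as the httpd configuration described above.
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # rejects SSLv3, TLS 1.0/1.1

assert ctx.minimum_version == ssl.TLSVersion.TLSv1_2
```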

~~~
arto
Seconded, and sourced:

> Microsoft cited public stats from SSL Labs showing that 94 percent of the
> Internet's sites have already moved to using TLS 1.2, leaving very few sites
> on the older standard versions.

> "Less than one percent of daily connections in Microsoft Edge are using TLS
> 1.0 or 1.1," Pflug said, also citing internal stats.

[https://www.zdnet.com/article/chrome-edge-ie-firefox-and-saf...](https://www.zdnet.com/article/chrome-edge-ie-firefox-and-safari-to-disable-tls-1-0-and-tls-1-1-in-2020/)

~~~
rutthenut
Well, given the amount of Internet traffic out there, I'd say that one percent
(albeit 'less than' that) could still be rather a lot of traffic that would be
blocked if TLS 1.0 or 1.1 were totally dropped.

~~~
pedrocr
The suggestion was to block it at the server not the browser. Presumably most
of that 1% is because of old sites/servers and not old browsers.

------
baybal2
Now, who was a TLS 1.3 committee member who pushed for removing "hard
versioning" from server hello?

------
merb
am I vulnerable if I only use TLS 1.2?

~~~
tialaramex
Yes. The only way to be invulnerable to this class of attack is one of:

1\. You never use RSA at all (the attack needs a server to be willing to do
RSA decryption, but clients only need to be willing to do RSA for certificate
verification)

2\. Everything is "on premises". This is a cache timing attack and probably
won't be practical even a short distance away over a network.

3\. Server doesn't allow any version below TLS 1.3
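
Option 3 is a one-line configuration in most stacks; a sketch with Python's
stdlib `ssl` module, shown for a client context (the same attribute works on
a server context):

```python
# Sketch: a context that refuses any protocol version below TLS 1.3,
# so a downgrade to TLS 1.2 RSA key exchange is impossible.
import ssl

ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3

assert ctx.minimum_version == ssl.TLSVersion.TLSv1_3
```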

~~~
merb
Actually, what I mean is that my server only responds with TLS 1.2, nothing
below or above.

So am I still vulnerable unless 1. or 2. applies?

~~~
tialaramex
Yes. Despite the HN title, the article topic isn't really "We found a new
problem in TLS 1.3"; it's closer to "Bleichenbacher oracles still exist in
lots of TLS implementations, although in two of the nine we checked we
couldn't find an oracle".

They only mention TLS 1.3 because otherwise uninformed people would say "Just
upgrade to TLS 1.3" which won't fix the problem.

~~~
merb
Thanks. Still a confusing title; I guess the best would be "TLS 1.2, 1.3".

------
java-man
Is BouncyCastle affected?

------
bitmadness
What about LibreSSL?

