If you have the luxury of greenfield development, you are welcome to try this. There's a pretty good chance you'll screw up badly, but regardless, when more opportunities for mistakes are discovered tomorrow you'll be vulnerable and won't have a greenfield any more. You get to learn basically the same lessons about agility everybody else did, the same way everybody else learned them. Brilliant.
The Web is not a greenfield development. Google may have "pushed HTTP/2" but you can still connect to their sites using HTTP/1.1 because _of course you can_. So that first step, where you throw everything that already exists away, is immediately the end of your whole strategy for the Web or more or less any public Internet service.
You might be thinking: "OK, old crap stuff would be affected, but my new shiny things would be fine". And you're almost right. But you have to really operate a scorched-earth policy: the new shiny things _must not_ interoperate with the old crap at all. And that's a deal breaker in practice on the Internet. If you say "Well, if we can't do shiny I guess we'll do the old thing" then you lose immediately. That's the thrust of their TLS 1.3 example: both client and server want to talk TLS 1.3, which isn't vulnerable - but the attacker abuses the fact that they're willing to talk TLS 1.2 instead.
If you don't want to do RSA kex in your own system where you control all servers and clients, don't do RSA kex. I commend this, it's good sense. You can use the exact OpenSSL version described as vulnerable in this article, switch off RSA key exchange entirely at both ends and the vulnerability vanishes. But alas even "almighty" Google does not control all servers and clients on the Web.
TLS is 25 years old now. CADT is not a fair criticism when more than the lifetime of a whole teenager has elapsed. At some point building a second system is worth it.
>You get to learn basically the same lessons about agility everybody else did, the same way everybody else learned them. Brilliant.
We would also get to throw away a bunch of stuff we already know was a very bad idea. There has to be a limit somewhere to that tradeoff. At some point starting fresh allows you to throw away enough crap that you come out ahead.
> If you say "Well, if we can't do shiny I guess we'll do the old thing" then you lose immediately. That's the thrust of their TLS 1.3 example: both client and server want to talk TLS 1.3, which isn't vulnerable - but the attacker abuses the fact that they're willing to talk TLS 1.2 instead.
There is no difference there between changing from 1.2 to 1.3 and from 1.2 to ShinyNewStuff. My point is that from what I've read we've learned enough that we could design something better than TLS if we started fresh, so we could do TLS1.2->ShinyNewStuff instead of TLS1.2->TLS1.3 and get some advantages. The downgrade attacks are present in both cases until you eventually discontinue 1.2.
>If you don't want to do RSA kex in your own system where you control all servers and clients, don't do RSA kex.
One of the things we've learned over the last 25 years is that having optionally insecure ways to use security standards is a very bad idea. So the fact that configuring TLS well enough is feasible is part of the problem, not the solution. That's the kind of thing we could potentially fix.
The on-the-wire protocols bear some faint resemblance, but the actual technology behind SSL 2.0 and TLS 1.3 is utterly different.
The original Bell phone system where you have to talk to an operator and say "Give me 4235 please", and a modern iPhone, are also utterly different - but every step along the way was achieved by backwards compatibility and that meant some compromises. So the iPhone still has "phone numbers" even though you probably rarely use them for anything.
That's all that was going on in TLS. The "start fresh" you were originally asking for involves _throwing away_ that compatibility to get "some advantages". I explained why TLS doesn't do that. The feeling that surely starting over would help is _exactly_ CADT even if you're not a teenager.
You absolutely can have those "some advantages" if you don't want backwards compatibility. But it's not obvious why you'd rewrite TLS rather than just use TLS in this case and refuse to downgrade below TLS 1.3. What happens when TLS 1.4 has even more "advantages" ?
This is not a good analogy. The phone system has thrown away plenty of standards as well. Throwing away TLS for something else doesn't have any compatibility problems on the web. There is no difference in compatibility between doing 1.2->ShinyNew and doing 1.2->1.3.
Maybe we can do enough within just TLS versions to fix the flaws. Maybe doing 1.4 with a much stricter set of conditions is possible and so we should keep the base. But for some reason that keeps not happening and insecure options like these still exist. But the compatibility argument doesn't hold. The end user would not notice anything.
No. Even just small changes to TLS resulted in massive compatibility mishaps. There was about a year delay in the TLS 1.3 process while they worked around things like this.
> But for some reason that keeps not happening and insecure options like these still exist.
Unfortunately you don't even understand enough about this topic to have noticed that the article is specifically mentioning a downgrade because the "insecure options like these" do not exist in TLS 1.3.
Insisting that problems you don't understand will go away if we just rewrite everything again from scratch is even closer to CADT than the anti-agility enthusiasts.
Those are part of the problem we should be solving. Delaying TLS 1.3 to enable middleboxes is precisely the kind of thing we should be throwing away from any standard. That TLS 1.3 was close enough to 1.2 for broken middleboxes to show how broken they are is an example of how backwards compatibility at all costs bites you, and yet you are using it to claim the opposite.
It's perfectly possible that all that needs to be done for TLS to stop being a continuous problem is a sane TLS 1.4. But I'd like to see an actual argument for what that would look like instead of you just insulting me without adding anything to the discussion.
Your "solution" of just breaking things while declaring this "doesn't have any compatibility problems" is not a solution people are going to accept. It genuinely doesn't matter that you think it'd be a great idea, not the tiniest bit, since it would see no adoption.
TLS 1.3 did not take this "backwards compatibility at all cost" approach you describe. On the contrary, it was engineered very carefully to work _around_ middleboxes. The protocol is untidy as a result, with extraneous compatibility fields, but the cryptography remains as intended. But you've not addressed that at all since your entire basis is "From what I've read a lot of the issues is the complexity of the standard itself".
Again, this issue you're reading about today is NOT a problem with TLS 1.3, and would not have been fixed by any changes to TLS 1.3 or imaginary alternatives to TLS 1.3 _unless_ as a side effect they prohibited falling back to earlier versions, something you can already choose to do if that's what you want (you do not).
The discussion around TLS 1.3 delays included plenty of people who also thought middleboxes should not be enabled. Whether a standard that broke compatibility with them completely would be adopted is an open question.
>TLS 1.3 did not take this "backwards compatibility at all cost" approach you describe. On the contrary, it was engineered very carefully to work _around_ middleboxes. The protocol is untidy as a result, with extraneous compatibility fields, but the cryptography remains as intended.
The second part contradicts the first. The standard has had to be made more complex to allow for the backwards compatibility and that's a future liability. We've had other holes in TLS from these extra complexities that are completely unneeded.
>Again, this issue you're reading about today is NOT a problem with TLS 1.3
Again, that's not my point at all. I'm not discussing today's issue in particular. I'm referring to the fact that we keep finding bugs in TLS consistently. I was wondering if we were at the point where a clean slate would reduce this. I've made that point, someone else has pointed to an experiment in doing just that, and maybe someday we'll get a full test. Your dismissal of the actual question as if it's invalid is just being unpleasant without being helpful.
A middle ground might be a modern stripped down TLS stack and something similar to HSTS to externally flag that a given site does not accept downgrades.
Some kind of award has to go to this sentence, that has to be the most convoluted way to simply say "aren't vulnerable."
In context you can only just barely follow it, and it literally involves counting the vulnerable + un-vulnerable libraries to check they all add up to 9...
RSA PKCS#1 v1.5: https://tools.ietf.org/html/rfc2313
Title: PKCS #1: RSA Encryption version 1.5
tl;dr: deprecate RSA encryption as a whole?! Did I read this right?
However, note that much of the problem with implementing RSA correctly is the padding. The specific recommendation here is to only use RSA OAEP, and preferably to abandon RSA altogether for more modern (elliptic curve) constructions.
So no, they're not saying to deprecate RSA in its entirety (though I have high confidence all of the authors would strongly suggest that to anyone who asked). Rather, they're saying you should only use RSA with one very specific form of padding, if you absolutely insist on using RSA in 2019 (and you shouldn't unless you know you have to).
They say the only way to use RSA is in a very very specific way that is probably not the default in many implementations. If you need to be careful when doing crypto you will do something wrong. It's just better for everyone if you just forget about RSA and switch to something that is both highly secure by default and hard to mess up in the implementation.
With DH key exchange, RSA is still used purely for signatures. RSA is not used to hide any data.
Where it excels is its original use in PGP/GPG, as no complicated key exchanges are taking place.
I think the current standard should be memory-safe implementations with proven robustness against known classes of attacks, and optional resistance against traffic analysis (at expense of wasted bandwidth).
It's likely possible to prove PKCS#1 RSA broken by design...
I rest my case.
Except someone wrote a library for timing proofs (including cache and memory) in Isabelle/HOL already, but it has to be combined with the recompile prover from SeL4 project. That would take some time and work.
Or work on ARM Cortex-A5 verification, then run it on the AMD-SP, which is one. Presuming they allow you to run your own code.
Likewise Intel ME... But that processor is some weird architecture.
Or other external hardware - smart card or the fancy U2F.
Of course the proof can be easily extended to handle other in-order RISC-V cores. Proving out-of-order execution is hairier; there's a lot of work to be done.
Oh, and you get to freeze the code to a given validated compilation, FIPS-style. Any random C compiler revision can mess you up. Or you rerun the prover.
Also, what if I disable RSA in my browser and make sure the ClientHello doesn't mention RSA? Will I be secure?
The type of site that wants SSL Labs A+ scores works fine. But your bank probably doesn't (they actively don't want ephemeral key exchange) and nor does some crumbly older HTTPS site running a stitched together Apache 1.x on an old Debian release.
To protect against this attack the server needs to refuse to try RSA key exchange OR you need to refuse RSA altogether including the safe and extremely popular authentication step.
Nevertheless, it is likely that LibreSSL has not yet replaced this part of the complex code inherited from OpenSSL. In that case, LibreSSL would also be vulnerable.
We will support 1.3 once the standard is firmed up and finalized (i.e. ceases to be coopted by vendors making changes to allow people to continue to run moribund middleboxes that can't recognize a new protocol on the wire). Since there is effectively nothing wrong with TLS 1.2 with a sanely chosen cipher suite today, we believe a clean, careful implementation is more beneficial than early adoption.
So there's no rush on the part of the developers to support it. Skimming the commits on that git mirror we can see 1.3 related commits started hitting late last year.
This way, the only timing signal available would be which requests take an outlier amount of time, and I doubt that's enough to break anything unless you can remotely cause the peer to hit slow disks or make network requests depending on secret data (which is a far more explicit programming choice than CPU timing differences).
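The classic example of such an explicit programming choice is a short-circuiting comparison of a secret, where response time leaks how long the matching prefix was. A sketch of the stdlib fix in Python (`check_token` is a made-up name for illustration):

```python
import hmac

def check_token(supplied: str, expected: str) -> bool:
    # A naive `supplied == expected` bails out at the first differing byte,
    # so the response time reveals how much of the prefix matched.
    # hmac.compare_digest examines every byte regardless of mismatches,
    # taking roughly the same time for any input of the same length.
    return hmac.compare_digest(supplied.encode(), expected.encode())
```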
One of the things you can do to make a significant difference is configure all of your httpd (apache2, nginx, whatever) to specifically disallow SSLv3, TLS1.0 and TLS1.1.
There is no longer any relevant population of useragents that don't understand TLS1.2.
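Concretely, that's a one-line change in either of the usual httpds (the TLSv1.3 token needs a new enough nginx and OpenSSL underneath, so treat these as sketches to check against your actual versions):

```
# nginx (ssl_protocols directive):
ssl_protocols TLSv1.2 TLSv1.3;

# Apache httpd, mod_ssl (SSLProtocol directive):
SSLProtocol all -SSLv3 -TLSv1 -TLSv1.1
```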
The article itself actually only mentions the TLS 1.3 downgrade in passing, and probably only because if they didn't, some people would say TLS 1.3 fixes this magically (it couldn't).
It's a Bleichenbacher Oracle, again. So the effect is you get the server to do RSA operations for you. You don't learn their private key, but the server uses it and you eavesdrop on the process.
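For anyone who hasn't met a Bleichenbacher oracle: the attacker sends chosen ciphertexts and only needs a yes/no signal about whether the decrypted block had valid PKCS#1 v1.5 padding (an error code, or here, a timing difference). A toy sketch of the conformance check whose outcome leaks - not a working attack, which takes on the order of hundreds of thousands of such queries:

```python
def pkcs1_v15_conforming(block: bytes) -> bool:
    # Per PKCS#1 v1.5 (RFC 2313), an encryption block looks like:
    #   0x00 0x02 | at least 8 nonzero padding bytes | 0x00 | message
    # A server that reveals the outcome of this check, even via timing,
    # hands the attacker exactly the oracle Bleichenbacher's 1998 attack
    # needs to get RSA operations performed without the private key.
    if len(block) < 11:
        return False
    if block[0] != 0x00 or block[1] != 0x02:
        return False
    try:
        separator = block.index(0x00, 2)
    except ValueError:
        return False
    return separator >= 10  # at least 8 nonzero padding bytes
```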
A TLS 1.3 ONLY server isn't vulnerable (no Oracle) and a client isn't vulnerable even in TLS 1.2 if it refuses to use RSA completely. But unlike your suggestion to disable much older versions, those options aren't very practical today.
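Where you do control both ends, refusing the downgrade is a one-liner; e.g. with Python's stdlib `ssl` (assuming OpenSSL 1.1.1+ underneath for TLS 1.3 support):

```python
import ssl

# A context that will not negotiate anything below TLS 1.3, so the
# RSA-key-exchange oracle in older versions is simply unreachable.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3
```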
> Microsoft cited public stats from SSL Labs showing that 94 percent of the Internet's sites have already moved to using TLS 1.2, leaving very few sites on the older standard versions.
> "Less than one percent of daily connections in Microsoft Edge are using TLS 1.0 or 1.1," Pflug said, also citing internal stats.
1. You never use RSA at all (the attack needs a server to be willing to do RSA decryption, but clients only need to be willing to do RSA for certificate verification)
2. Everything is "on premises". This is a cache timing attack and probably won't be practical even a short distance away over a network.
3. Server doesn't allow any version below TLS 1.3
3. Almost all known servers accept TLS 1.2, and definitely banks.
So I am still vulnerable to 1. and 2.?
They only mention TLS 1.3 because otherwise uninformed people would say "Just upgrade to TLS 1.3" which won't fix the problem.