

This working group has failed - mmoya
https://www.ietf.org/mail-archive/web/tls/current/msg10598.html

======
lazyjones
Specs designed by committee usually fail when they produce just the "paper"
and no reference implementation. Look at the W3C, which consistently failed to
develop its own reference browser. The same seems true for TLS: actual
implementations are too complicated because the committee paid no attention to
implementability. A positive example is MP3, where everyone just copied the
reference codec in the beginning.

~~~
octo_t
A good example here would be C++. Look at how long it took for C++11 to be
implemented; now at least two compilers (gcc and clang) are feature complete,
and they're on schedule to be feature complete for C++1y[1], with
implementation and specification in sync.

[1]
[http://clang.llvm.org/cxx_status.html#cxx14](http://clang.llvm.org/cxx_status.html#cxx14)

~~~
MichaelGG
It also helps that C++ doesn't have nearly the number of implementers in a
position to screw stuff up, unlike the countless webservers, SSL libraries,
load-balancers, firewalls, proxies, etc.

How many C++11 compilers are there? This support matrix[1] seems to indicate
there are 2 compilers in the lead, with 5 providing fairly decent coverage.
How popular is XLC++?

And anyway, adopting another compiler is something you choose to do, and
usually run at least a few tests against, right? Unlike networking protocols,
where it's invisible to you that someone has some idiotic inspection hardware
that segfaults on certain packets and drops your connection.

Edit: Also, even if two compilers are feature complete, it doesn't mean they
implement the features in perfectly compatible ways, right?

1:
[https://wiki.apache.org/stdcxx/C%2B%2B0xCompilerSupport](https://wiki.apache.org/stdcxx/C%2B%2B0xCompilerSupport)

------
MichaelGG
Here's an interesting post about TLS compatibility[1]. I guess it explains why
no browser had TLS 1.2 enabled by default for so long.

" To add to this discussion about protocol version intolerance, I've been
tracking this problem in my SSL Pulse data set (SSL servers from the Alexa top
1 million).

Here's what I have for November:

      Total servers: 163,587

      TLS 1.0 intolerance         9
      TLS 1.1 intolerance     1,388
      TLS 1.2 intolerance     1,448 (~ 0.9%)
      TLS 1.3 intolerance    17,840 (~10.9%)
      TLS 2.98 intolerance  122,698 (~75.0%)

      Long handshake intolerance: 4,795 (~2.9%)

"

1: [https://www.ietf.org/mail-archive/web/tls/current/msg10657.html](https://www.ietf.org/mail-archive/web/tls/current/msg10657.html)
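
For a rough sense of how such a scan works, here's a minimal sketch using
Python's ssl module (the hostname is a placeholder; the actual survey
evidently offered made-up future versions like 2.98, which the ssl module
can't express, so this only approximates the idea with real versions):

    import socket
    import ssl

    def probe(host, max_version, port=443):
        # Offer the server a given maximum TLS version; a tolerant server
        # should simply negotiate down to the highest version it supports.
        ctx = ssl.create_default_context()
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE   # only the handshake matters here
        ctx.maximum_version = max_version
        try:
            with socket.create_connection((host, port), timeout=5) as sock:
                with ctx.wrap_socket(sock, server_hostname=host) as tls:
                    return tls.version()  # e.g. "TLSv1.2"
        except (ssl.SSLError, OSError):
            return None  # handshake failed: possibly version-intolerant

    for v in (ssl.TLSVersion.TLSv1_2, ssl.TLSVersion.TLSv1_3):
        print(v.name, "->", probe("example.com", v))

A server that fails outright when offered a version newer than it supports,
instead of negotiating down, is what the numbers above call "intolerant".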

~~~
bradleyjg
There's a trade-off to be made. On the one hand browser users would like every
page to "just work". On the other hand they want secure connections to
actually be secure.

If those 0.9% of websites break in the latest versions of Firefox, Chrome, and
IE, they are more likely to be fixed than if they are coddled through some
workaround or, even worse, by holding back general progress. The former is
better for web security. On the other hand, people who want to visit those
websites in the meantime will be inconvenienced.

Perhaps a compromise is to build the workaround, but put in an interstitial
scare screen. That might generate the desirable social pressure on the website
owner without making it impossible to visit.

~~~
gsnedders
> On the other hand, people who want to go to those websites in the meantime
> will be inconvenienced.

…and they may well move to another browser which doesn't support TLS/1.2, or
stay on an out-of-date (insecure!) version of the browser, which doesn't help
web security either.

------
y0ghur7_xxx
Looks like it's down.

Cache:
[http://webcache.googleusercontent.com/search?q=cache:https://www.ietf.org/mail-archive/web/tls/current/msg10598.html&hl=en&strip=1](http://webcache.googleusercontent.com/search?q=cache:https://www.ietf.org/mail-archive/web/tls/current/msg10598.html&hl=en&strip=1)

------
makomk
Given recent revelations, one has to wonder if the working group merely failed
by itself or was given a substantial nudge in that direction by someone who
wanted TLS to be insecure.

~~~
MichaelGG
Apart from specific contributions by the NSA (if any?), is it hard to believe
people screwed up? Look at HTML standards and what a mess that is. Hell, look
at HTTP and the insane stuff in that spec.

I'm guessing spec writing is much harder than one might assume, especially if
you're not writing elegant code to implement the protocol at the same time.
Committees only make things worse, and the IETF RFC format doesn't help
either. (The SCSI specs, by comparison, with their nice graphical diagrams,
are much cleaner to read - not that a plain RFC can't convey the same info,
but good diagrams really do help.)

Extensibility is something people get wrong all the time - the TLS 1.2 issue
seems to be that enabling TLS 1.2 ends up breaking lots of users.

Without someone familiar with the subject pointing out exact contributions
(I'd assume repeated issues raised by the same group of people), normal
incompetence is more than enough to explain things, isn't it?

~~~
gsnedders
Spec writing is only half the challenge (and arguably a relatively easy one!).

A big problem with HTML and HTTP is the fact that error handling has always
been undefined (the current HTML spec is the first real attempt to define
error handling!) — so different people implement different things, and then
when you end up with a single implementation with ~90% marketshare, everyone
has to reverse-engineer that (and probably not perfectly!). Combine this with
the fact that the majority of implementers of HTML and (to an only slightly
lesser extent) HTTP are web developers who just want to get stuff working —
they've never read a page of the spec in their life. So, guess what, they end
up relying upon clients' error-handling (typically the intersection of the
clients they care about — which, when IE had 90%+ marketshare, was often just
IE; then see above about reverse-engineering).
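
To make the undefined-error-handling point concrete, take mis-nested
formatting tags: pre-HTML5, browsers each recovered from markup like
"<b><i>...</b>...</i>" differently, and the current spec finally pins down a
single recovery (the "adoption agency" algorithm). A quick illustration with
Python's stdlib parser, which, like the old specs, just reports what it sees
and leaves recovery entirely to the caller:

    from html.parser import HTMLParser

    # The stdlib parser is purely event-based: it reports tags in the order
    # seen and defines no recovery for mis-nesting, which was the state of
    # every HTML spec before HTML5.
    class ShowEvents(HTMLParser):
        def handle_starttag(self, tag, attrs):
            print("start", tag)

        def handle_endtag(self, tag):
            print("end  ", tag)

    ShowEvents().feed("<b><i>bold italic</b> just italic?</i>")

Every consumer of that event stream has to invent its own tree-repair policy,
which is exactly how implementations diverged.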

The other big problem is the lack of generic test suites for the standards —
far too often each implementer ends up writing their own tests, and then not
sharing them (the IETF still has no real general infrastructure for hosting
testsuites for RFCs!). At least around the W3C, there's been a relatively
large movement in the past three years or thereabouts towards developing
shared testsuites, in large part down to various Microsoft and Opera people
(myself included, in the early days, as a disclaimer) and several WG chairs
(trying to push specs to REC — which nowadays requires two interoperable
implementations, and hence practically a testsuite), so things are at least
slowly changing there. But there's still a lot of work to be done — and it's
one area where improvements can have large effects, as it increases the
consistency of all implementations, and makes it easier for smaller, and
newer, competitors to enter the market.

------
vacri
Following the thread: the TLS 1.2 spec was completed in 2008, but it wasn't
supported in OpenSSL until mid-2012 - so anything that depends on OpenSSL had
to wait until at least then, then go through implementation and reshipping,
then trickle down to the end vendors. And with no one using TLS 1.2 or
having a need for it because it wasn't available, it was back-burnered by the
browsers.

The follow-up comments paint a much fuller picture of why things are delayed,
where the failures are, and what's going on.

~~~
makomk
If you read the follow-ups, it's not just that no one used TLS 1.2 because it
wasn't available; enabling support for it actually broke stuff for end
users[1] - and the browser developers knew this was almost certain to be the
case even without looking, because it almost always is.

Also, as the linked email points out, we shouldn't have needed TLS 1.2 in the
first place in order to be secure. It was already known at the time TLS 1.0
was designed that they were doing things in ways likely to be insecure for no
good reason, but they did it anyway.

[1] [https://www.ietf.org/mail-archive/web/tls/current/msg10614.html](https://www.ietf.org/mail-archive/web/tls/current/msg10614.html)

~~~
fulafel
Buggy middleboxes are what broke. They're increasingly preventing the
deployment of new protocols and apps on the internet.

------
parennoob
Maybe it would help to make things more intelligible, instead of using
language that is extremely obfuscated and confusing, and unaccompanied by any
actual mathematics?

Take this sentence from the email for instance:

"Even AES-GCM got screwed up: nonces should be counters, but all
implementations make them random, introducing an artificial birthday bound
issue due to truncation in the standard."

I have no idea WTF this means, but let's go over it:

nonce: I know this is a randomly generated number that can only be used once
-- now why should it be a counter? No idea.

"but all implementations make them random": wait, aren't they supposed to be
random by definition? According to the above line, though, they are not
supposed to be random. Damn, what I knew must be wrong. I wonder if this
person on the internet has submitted some sort of explanation about this
somewhere.

'artificial birthday bound issue': Assuming this refers to the birthday attack
([http://en.wikipedia.org/wiki/Birthday_attack](http://en.wikipedia.org/wiki/Birthday_attack)).
Why is it "artificial"? Can we see some mathematical proofs attached, please? I
sort of get the idea here -- because the nonce is random, it is liable to
repeat after a certain number of messages -- but there is nothing concrete
attached here. Or I could be totally wrong in this interpretation. God knows,
and maybe this chap.

"...due to truncation in the standard." \-- Do you mean some sort of
_mathematical_ truncation, i.e. "my number was truncated to 16 bits", or
truncation of the standard itself "the last section of the standard was
removed"? Please be clear.

The same goes for most things related to crypto -- if you want stuff like TLS
to be examined by more eyeballs and have more bugs found, you first have to
make it more accessible. The sentences above are, in my opinion, a complete
communication failure.

~~~
bodyfour
If you choose a random nonce then it follows that a nonce could be randomly
reused. If there are N possible nonces, this will happen on average after
approximately sqrt(N) packets (the birthday bound). If you use a counter as a
nonce, it will only repeat after N packets, no matter what.
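
To put rough numbers on that for AES-GCM's 96-bit nonce (a back-of-the-envelope
sketch in Python, using the standard birthday-bound estimate):

    import math

    nonce_bits = 96        # AES-GCM nonce size
    N = 2 ** nonce_bits

    # Random nonces: a repeat becomes ~50% likely after about
    # sqrt(2 * N * ln 2) messages -- roughly sqrt(N).
    birthday = math.sqrt(2 * N * math.log(2))
    print(f"random:  ~50% collision after ~2^{math.log2(birthday):.1f} messages")

    # Counter nonces: the first repeat happens only after all N values.
    print(f"counter: first repeat after 2^{nonce_bits} messages")

That's roughly 2^48 messages for random nonces versus 2^96 for a counter.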

For some algorithms a simple incremented value ("1, 2, 3, ...") is all that's
needed, but this means an attacker seeing only two packets can at least
estimate how quickly packets are being sent. However, if you encrypt this
stream of incrementing numbers with a constant symmetric key, you get the best
of both worlds: a nonce stream that looks random but is guaranteed not to
repeat until after each possible value has been used. Usually when crypto
people talk about a "counter" this is the technique they're referring to.

~~~
parennoob
Thanks! Perfect explanation of the 'counter' term, makes sense. :) Upvoted.

