
There are smart people working in the TLS WG, but there are also people there who shouldn't be governing the development of the Internet's most important encrypted transport.

More importantly, the working group appears to be geared towards saying "yes" to new features, when exactly the opposite posture is needed. The Heartbeat feature is just the most obvious example, because its reference implementation was so slapdash.

The weirder thing here is that it would even be controversial or newsworthy to disable TCP TLS heartbeats, a feature nobody needs or really ever asked for. Why was it enabled by default in the first place? What sense did that decision ever make?




I have read all the TLS WG discussions I can find on this RFC. The picture that emerges for me is that of an inherent, institutional failure in the nature of working groups.

As far as I can tell, only one person raised the "we don't need this" objection (Nikos Mavrogiannopoulos, GnuTLS's principal author). He asked for a defense of the feature on three occasions [0] [1], and received a response twice, both times from the same person. There was no further discussion, so either his objection was addressed or he felt further discussion would be unproductive; I would be interested to know which. GnuTLS does ship with heartbeat disabled, which suggests he remained skeptical of the feature.

However, this document was eyeballed by many security experts (whom I will not name and shame here) who proofread it or contributed comments and edits without ever objecting to the general premise.

It seems to me that, inherent in the nature of a working group, the specific parties in favor of a feature will always be better represented than the wider audience that faces the nonspecific harm of the feature being implemented poorly and introducing a security vulnerability. Sort of a reverse-NIMBY situation, if you like. The software community cannot mobilize to combat every new TLS proposal on the grounds of nonspecific harm, but everybody with a pet feature can mobilize to make sure it gets standardized.

[0] http://ietf.10.n7.nabble.com/tsv-dir-review-of-draft-ietf-tl...

[1] http://ietf.10.n7.nabble.com/Working-group-last-call-for-dra...


Bear in mind, the feature itself isn't insecure. The experts who OK'd the feature failed to police an unstated and controversial norm for the group (that it should default to "no" answers). It's hard to fault people for that. I agree: the problem is the IETF process.


> The weirder thing here is that it would even be controversial or newsworthy to disable TCP TLS heartbeats, a feature nobody needs or really ever asked for. Why was it enabled by default in the first place? What sense did that decision ever make?

You're essentially asking the exact same question Theo de Raadt is asking, hence his extreme suspicion of the IETF.


I know, I'm agreeing with him.


Presumably people want TLS heartbeats for TCP because, by default on Linux, TCP keepalive doesn't kick in until the connection has been idle for two hours.

Why write a call to setsockopt() when we can just invent new features?
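For the record, here's roughly what that call looks like. A minimal sketch: the TCP_KEEP* options are Linux-specific, and the timing values are made up for illustration.

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* Enable keepalives on a connected TCP socket and shorten
       the default two-hour idle period. Returns 0 on success. */
    int enable_keepalive(int fd)
    {
        int on = 1, idle = 60, intvl = 10, cnt = 5;

        if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on)) < 0)
            return -1;
        /* Linux-specific knobs: start probing after 60s idle,
           probe every 10s, give up after 5 unanswered probes. */
        if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle)) < 0 ||
            setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof(intvl)) < 0 ||
            setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &cnt, sizeof(cnt)) < 0)
            return -1;
        return 0;
    }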


It's worse than that. Keepalives are only really useful for (a) long-lived connections that (b) are expensive (usually: requiring manual intervention or renumbering) to reestablish.

Those parameters describe no connection made by browsers.

In other words: application-layer keepalives are only valuable to applications that already have the freedom to define their own truly application-layer keepalive anyway.


And that still leaves the question of why spec a payload (much less 64k "for flexibility") in a TCP heartbeat exchange.


Because it's the same heartbeat message used for DTLS, where the heartbeat and padding allow for variable-length probes, with request and response varying in size.
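Here's the shared message format, straight from RFC 6520:

    struct {
       HeartbeatMessageType type;
       uint16 payload_length;
       opaque payload[HeartbeatMessage.payload_length];
       opaque padding[padding_length];
    } HeartbeatMessage;

The receiver has to echo the payload back unmodified, and the padding is ignored on receipt, which is what lets the request and response differ in size.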


That is understood, Tom. The question remains: why spec the same thing for two distinct transport-layer protocols?

[edit: actually I was under the impression that the payload addressed response order concerns in UDP.]


(It's Thomas). Because it would have made even less sense to define a TLS-specific heartbeat and a DTLS-specific heartbeat.

In the hierarchy of sensible TLS decisions, you have, from most to least reasonable:

1. Not adding new heartbeat extensions to DTLS or TLS.

2. Adding new heartbeat extensions to DTLS only.

3. Adding the same new heartbeat extensions to DTLS and TLS.

4. Adding two different new heartbeat extensions, one for DTLS and the other for TLS.


Not that I'm disagreeing with you, but aren't SSL connections rather expensive to establish because of the public-key key exchange? Of course, anno 2014 that doesn't matter, but the whole library seems a bit engineered for 1998, when establishing SSL was probably a pretty significant thing.


They should be regarded as expensive today, because the key exchange is one of the distinct parts of the attack surface of an SSL implementation. The less often this exchange is visible to eavesdroppers, the better.


Repeated TLS reconnections do not necessarily invoke the entire key exchange; session resumption lets a client skip the full handshake.


.. I don't think that's a very realistic concern, is it?


Aren't WebSocket ping/pong messages similar?


WebSocket messages happen two layers up from TLS.


IOW: totally worthless.

The penalty for security failures in useless features should be a slow, painful, humiliating death.


You still need them for UDP.


Would you please link to this supposedly slapdash reference implementation? Taking RFC 6520 at face value, it seems fairly reasonable and not out of line with other existing heartbeat protocols.

I've not read as much as I would like on the IETF's involvement in this, but as I read the situation, Theo has just hurled vitriol at them for specifying a completely reasonable feature that the OpenSSL team implemented incredibly poorly.


It's not a completely reasonable feature.

It's a feature that had a sensible use case in DTLS (datagram TLS, TLS not run over TCP). It's unclear as to whether the use case was best addressed at the TLS layer, whether it could have been specified as something run alongside TLS, or whether it was something that applications that cared about path MTU could have handled for themselves.

The TLS WG actively discussed whether Heartbeat had reasonable use cases in TCP TLS. Some people agreed, some disagreed, and nobody threatened a temper tantrum if the feature was added. Therefore the extension was specified for both TCP TLS (which does not need extra path MTU discovery, and which already has a somewhat crappy keepalive feature) and DTLS.

The larger problem is in treating TLS like a grab-bag of features for corner-case applications.


Okay, I definitely agree regarding crufty specs. The decision to include heartbeats could easily have left it as a variant feature rather than core spec.

I still don't agree with the OpenSSL implementation being the reference spec, if that's what you were referring to, as pygy_ pointed out. Unless the IETF released that code as an institution, I would consider it the same as any other third-party implementation, i.e. not necessarily correct. How am I to know that the RFC's author wrote it unless I go digging? Why should I trust anything they wrote that may or may not have gone through rigorous checking?

This isn't so much directed at you, tptacek (since your pedigree is well known), as at the others bashing the IETF for implementation flaws: bash them for what they actually did, and perhaps instead of getting angsty at the powers that be, try getting involved in projects like this if you believe they are so important.


Last line, can we make that 72 pt font and top of the page?

(HN feature request: soundbits.)

This whole approach to diluting ostensibly one of the most important standards is beyond unprofessional; it's an example of failure at many layers of meta.

I would really like to see a compelling alternative to keep TLS honest, perhaps with Convergence-like ideas and a bare-bones feature set.

The problem is always adoption, corporate fear of change ($upportabi£it¥), and endpoint rollout, but the threat of an alternative might be enough to scare the bejebus out of the WG and get it back on task.


The reference implementation is the code that ended up in OpenSSL.

It was actually included in OpenSSL before the RFC was published.

Both the RFC and the code were written by the same person (who denies planting the bug intentionally).
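For the curious, here is the essence of the bug, condensed from OpenSSL 1.0.1's tls1_process_heartbeat() (paraphrased, not a verbatim quote):

    unsigned char *p = &s->s3->rrec.data[0], *pl;
    unsigned short hbtype;
    unsigned int payload;
    unsigned int padding = 16;

    hbtype = *p++;
    n2s(p, payload);   /* length field read from attacker-controlled data */
    pl = p;

    if (hbtype == TLS1_HB_REQUEST)
        {
        unsigned char *buffer, *bp;
        buffer = OPENSSL_malloc(1 + 2 + payload + padding);
        bp = buffer;
        *bp++ = TLS1_HB_RESPONSE;
        s2n(payload, bp);
        /* payload was never checked against the record's actual
           length, so this can read up to 64k past the request */
        memcpy(bp, pl, payload);
        /* ... */
        }

The fix was essentially a single bounds check: verify that the claimed payload length actually fits inside the received record.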


Was that code ever actually sanctioned by the IETF and included in their resources, though? SSL had SSLREF, but I've never seen (and didn't find anything just now) a similar thing for TLS.


I won't judge whether the heartbeat feature itself is reasonable or not. But the inclusion of a payload was unnecessary, and specifying that it has to be copied to the reply message is so useless as to defy description. "Flexibility" my ass. It's downright suspicious.


The payload is meant to let you distinguish responses to different heartbeats, right? I'm no expert, but that sounds reasonable enough. I could see a lower cap on the maximum payload size, but could you elaborate on why it's unnecessary and useless?


Okay, that explains things a bit. It could be done with a fixed-size payload, though. I also understand that the flexible size is intended to help with MTU discovery, but I think that should be a separate thing from a heartbeat.


Why would you need to send multiple different heartbeats over one connection? It's for keeping the connection alive.


If you are using datagrams (i.e., UDP), you have no guarantees of ordering or delivery, so without a payload that is returned to you, you have no way to determine which transmission the echo reply you just received corresponds to.

Now, a 64-kbyte payload is unnecessary for simply making each packet unique. That size was likely chosen to allow for the path-MTU-discovery aspects.

One could argue, however, that "keepalive" and "path MTU discovery" should not have been commingled, but they were.


Ping manages the same thing by incrementing an integer. For example.


No, ICMP echo request packets have variable-length payloads, which the receiver copies into the echo response.
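For comparison, the echo message layout from RFC 792, sketched as a C struct (the field comments are mine):

    #include <stdint.h>

    struct icmp_echo {
        uint8_t  type;        /* 8 = echo request, 0 = echo reply */
        uint8_t  code;        /* always 0 for echo */
        uint16_t checksum;
        uint16_t identifier;  /* e.g. the sender's PID */
        uint16_t sequence;    /* the incrementing integer */
        /* variable-length data follows; the receiver copies it
           into the reply verbatim */
    };

So both things are true: ping increments a sequence number, and it also carries an arbitrary payload (by default a timestamp plus filler bytes) that gets echoed back.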


Not much more suspicious than ping(8).


I always assumed ping's payload was for detecting packet fragmentation/size limit issues and as a trivial mechanism for spotting things like packet corruption.


Is the problem that debate (even devil's-advocate debate) is structurally limited when participants self-select for willingness to act?

That dynamic reminds me of the bias labeled "sympathetic point of view" as described in the essay "Why Wikipedia is not so great": "Articles tend to be whatever-centric. People point out whatever is exceptional about their home province, tiny town or bizarre hobby, without noting frankly that their home province is completely unremarkable, their tiny town is not really all that special or that their bizarre hobby is, in fact, bizarre. In other words, articles tend to a sympathetic point of view on all obscure topics or places."

That is surely not how we want decisions made about TLS.


TLS is the zombie emperor of design-by-committee feature creep.


So: would it be reasonable to create a minimal implementation of SSL focused on web browsers and web servers? How much work would it be to create such a minimal implementation that is reasonably performant, written by competent devs, well tested, fuzzed, etc.? Just because random shit gets thrown into the protocol, that doesn't mean it needs to be implemented in my copy of Chrome or nginx...


OpenSSL itself is highly configurable, so I think it would be better to pare it down to a minimal build that is both secure and compatible with all widely deployed TLS implementations.


I do this already on OS X, and even submitted a patch to allow Ruby to run without OpenSSL engine support. [0]

How I compile OpenSSL: [1]

[0] https://bugs.ruby-lang.org/issues/9714

[1]

    ( export CONFIGURE_OPTS='no-hw no-rdrand no-sctp no-md4 no-mdc2 no-rc4 no-fips no-engine'
      brew install https://gist.github.com/steakknife/8228264/raw/openssl.rb )


That looks like a good start, but can we drop even more? DES? Camellia? IDEA? MD5?


Good start my ass, you haven't actually tried any of your "suggestions," have you?

no-md5 doesn't currently work:

    "_EVP_md5", referenced from ...

no-des doesn't currently work either; the following test fails the build:

    enveloped content test streaming S/MIME format, 3 recipients: generation error


IME, it's much harder to write, understand, test, and debug #include soup.


Not really. Only a handful of features are necessary, and JEOS-like minimal systems should enable only the features that are absolutely needed. Minimalism means a smaller attack surface.


It's certainly not without its problems (see, e.g., the news of the last few weeks), but GnuTLS is, in my opinion, technically superior to OpenSSL. Unfortunately, its license will prevent more widespread adoption.


Just go read the PolarSSL docs and fuzz that.

It's small, modular, documented, and tested.

We would all get further by improving PolarSSL than by starting another library.


Unfortunately it's licensed GPL-2, so no love. Which is too bad, because I agree.

https://github.com/polarssl/polarssl/blob/master/library/ssl...


It's actually GPL-2+, but yes, this unfortunately limits the pool of outside contributors.


Hmm. Why does it say Dual-Licensed in the description?


Because you can also pay them for a commercial license.


What is wrong with GPLv2?



