Compact TLS 1.3 (ietf.org)
92 points by Tomte 66 days ago | 24 comments



This is unlikely to get adopted as a standard, because it only exists in response to OSCORE[1] and EDHOC[2]. The draft is an effort to prove that the TLS working group, which almost exclusively deals with large-scale devices such as laptops, phones, and servers, will listen to and address the needs of the embedded-devices (IoT) community. The catch is that the draft didn't exist until a separate working group formed to create a lightweight protocol, which rather proves that the TLS working group won't actually address embedded. Whether cTLS or EDHOC becomes an RFC remains to be seen, but my money is on EDHOC.

[1] https://datatracker.ietf.org/doc/rfc8613/

[2] https://datatracker.ietf.org/doc/draft-selander-ace-cose-ecd...


It's slated as Informational, not Standards Track.

You say that "The draft is an effort to prove that the TLS working group, which almost exclusively deals with large-scale devices such as laptops, phones, and servers, will listen to and address the needs of the embedded devices (IoT) community", but you don't present any evidence for that whatsoever. Rescorla's presentation of this at the most recent IETF meeting didn't sound to me like he was trying to "prove" anything to the IoT community, whether to the extent that community is represented by core (the working group that produced RFC 8613) or in a broader sense. It would surely make sense for anyone trying to do that to present _to_ that community and not to the TLS Working Group.

cTLS is just smaller TLS; it doesn't even say "this is for embedded" anywhere. The reason TLS itself isn't smaller is purely interop: if you don't have interop concerns (e.g. there aren't middleboxes in your network), then cTLS in principle gives you all the same benefits for fewer bits on the wire.


Middleboxes are a great example of why cTLS isn't the right direction. cTLS is a smaller TLS, but like TLS it doesn't deal with being proxied. This is by design, of course, because MITM is bad and proxies are MITM. However, there are no deployed wide-area low-power IoT networks I know of that do not use application-layer proxies. Even the cellular offerings I've seen depend on data-collection services as a middleman. OSCORE/EDHOC have been designed to accommodate proxies and provide a secure transport through the proxy, while still allowing the proxy to do its part. Even HTTPS CONNECT doesn't do this. And EDHOC by itself can be used in more exotic key-exchange schemes, which, it turns out, pop up all the time in IoT design.

So yeah, cTLS is a smaller TLS, which, like TLS, doesn't meet the needs of today's IoT designers.


If an app isn't aware it's being proxied, how do you keep TLS secure but still allow proxying? There are already solutions for proxying apps/clients when they can be made proxy-aware. One option is to add your corporate CA to the trusted roots list (and transparently proxy through something that auto-generates certs). Another is to have the app explicitly connect to a corporate proxy.company.com site running a forward proxy (non-transparent proxying). What other solutions are necessary?
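
For the client side, a rough sketch of those two options (assuming the Python requests library; the proxy host, device URL, and CA path are made up):

  import requests

  # Option 1: explicit forward proxy -- the client is proxy-aware (hypothetical host/port).
  via_proxy = requests.get(
      "https://device.example.com/status",
      proxies={"https": "http://proxy.company.com:3128"},
  )

  # Option 2: transparent MITM proxy -- no proxy setting; the client just trusts the
  # corporate CA that signs the auto-generated certs (hypothetical CA bundle path).
  via_mitm = requests.get(
      "https://device.example.com/status",
      verify="/etc/ssl/corp-ca.pem",
  )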

It seems like cTLS is not addressing proxying because there's no need to address it. cTLS is addressing power/complexity requirements only, right?

For consumer IoT devices, this all raises the question of whether IoT devices should be directly connecting to anything external. Something like Mozilla's WebThings gateway[1] needs to catch on and become standardized. Even for IoT devices that are capable of supporting TLS to arbitrary internet endpoints, having such devices all talking to arbitrary external sites is a disaster.

[1] https://www.google.com/search?q=ycombinator+mozilla+webthing...


I agree. I'm interested in hearing more about why proxies/MITMs need to be supported as first-class actors.


Sure. The kinds of IoT devices I work with aren't consumer IoT. We operate networks of devices where a sensor may be attached to a network interface and the application driving that sensor is somewhere out on the Internet (perhaps on a VPN).

The application wants to connect to the sensor (perhaps for configuration) and to do that needs access to the network on which that device is operating. The network won't be exposed to the public Internet, and is protected via access rules enforced at a proxy, and traffic management may be done at a proxy as well. The application vendor may not trust the network operator with payloads, so wants to keep their payloads secret. So the result is a need for proxies where the payload is secret, but the destination is not, and a secure end-to-end envelope (from the application to the sensor) can be formed. TLS can't address this. Nor can VPNs.


The way I'd build this, if I had to, is to a) terminate TLS at the proxy, and b) use an interior protocol (possibly TLS) between the client and the sensor, with the client telling the sensor a digest of (or perhaps the whole) EE certificate presented by the proxy.

This might look like a CONNECT if the proxy will permit it, though using HTTPS to talk to the proxy, not HTTP, naturally. Or it might look like a full-duplex POST to the sensor through the proxy, with TLS being spoken over that POST. If neither CONNECT nor full-duplex POST is an option, I might try WebSockets, and as a last resort I'd split the protocol into multiple POSTs, one for each round trip.
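
A rough Python sketch of that nesting, assuming hypothetical proxy/device names and a CA bundle for the sensor fleet; the inner TLS session is carried end-to-end over the outer one via CONNECT:

  import socket, ssl

  PROXY = ("proxy.example.net", 8443)   # hypothetical proxy
  DEVICE = ("sensor.internal", 443)     # hypothetical name the proxy can route to

  # Outer TLS connection to the proxy (HTTPS to the proxy, not plain HTTP).
  outer = ssl.create_default_context().wrap_socket(
      socket.create_connection(PROXY), server_hostname=PROXY[0])
  outer.sendall(f"CONNECT {DEVICE[0]}:{DEVICE[1]} HTTP/1.1\r\n"
                f"Host: {DEVICE[0]}:{DEVICE[1]}\r\n\r\n".encode())
  assert outer.recv(4096).startswith(b"HTTP/1.1 200")

  # Inner, end-to-end TLS connection to the device, tunnelled through the outer one.
  inner_ctx = ssl.create_default_context()
  inner_ctx.load_verify_locations("sensor-fleet-ca.pem")   # hypothetical fleet CA
  incoming, outgoing = ssl.MemoryBIO(), ssl.MemoryBIO()
  inner = inner_ctx.wrap_bio(incoming, outgoing, server_hostname=DEVICE[0])

  def pump(op, *args):
      # Run one inner-TLS operation, shuttling its bytes over the outer socket.
      while True:
          try:
              result, done = op(*args), True
          except (ssl.SSLWantReadError, ssl.SSLWantWriteError):
              result, done = None, False
          if outgoing.pending:
              outer.sendall(outgoing.read())    # inner records out, opaque to the proxy
          if done:
              return result
          incoming.write(outer.recv(4096))      # inner records in

  pump(inner.do_handshake)                      # end-to-end handshake
  pump(inner.write, b"GET /config HTTP/1.1\r\nHost: sensor.internal\r\n\r\n")
  print(pump(inner.read, 4096))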

EDIT: At any rate, this sounds exactly like what CONNECT is for, and this covers your needs:

  - The client trusts the proxy if it can authenticate it.
  
    The proxy is an explicit client-side proxy, though
    it's actually on the server-side, which might seem strange.

  - The client trusts the device if it can authenticate it.
  
    The proxy cares only to decide whether to route the
    request.

  - The proxy may need the client to authenticate to the
    proxy -- there exist headers for this, and the client
    could also use a user certificate in the TLS
    connection to the proxy.
  
  - The client may need to authenticate to the device,
    which it can do in the TLS connection to the device
    that runs over the outer CONNECT, with all existing
    choices for authentication available.

The only strange thing is that historically client-side proxies are accessed w/o TLS, but you really need to use TLS here because the "client-side proxy" is actually far away, across the big bad Internet. That client-side proxies are usually talked to w/o TLS is really a bug.

EDIT: If you find that getting your client to use TLS to talk to the proxy is difficult, just think how much more difficult it is to implement a new protocol from scratch.


You missed the requirement for the payloads to be secret from the proxy. If the proxy is terminating TLS, then the payload isn't secret from the proxy (and the sensor can't authenticate the source except as coming from the proxy, either).

It's worth adding that in my case there is no TCP or HTTP, so the home stretch over the LPWAN is UDP and CoAP.


I did not miss that. The client runs the TLS connection end-to-end (to the device) over the CONNECT. That connection will be secure relative to the proxy.

That there's no TCP/HTTP doesn't alter any of this. You can use UDP and DTLS this way, except you'll have to create the proxy protocol, you'll need the proxy to maintain some "connection" state, and you'll need to expose inner connection close to the proxy so it can clean up that "connection" state.
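
A minimal sketch of what that UDP proxy might look like (hypothetical addresses; it forwards DTLS records opaquely and keeps per-client "connection" state, which is exactly the state that needs cleaning up when the inner connection closes):

  import socket

  LISTEN = ("0.0.0.0", 5684)            # hypothetical listener for client datagrams
  DEVICE = ("sensor.internal", 5684)    # hypothetical device behind the proxy

  relay = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
  relay.bind(LISTEN)

  upstream = {}   # client address -> UDP socket toward the device ("connection" state)

  while True:
      data, client = relay.recvfrom(2048)
      sock = upstream.get(client)
      if sock is None:
          sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
          sock.connect(DEVICE)
          upstream[client] = sock
      sock.send(data)                    # DTLS records pass through unread
      reply = sock.recv(2048)            # simplification: assumes one reply per datagram
      relay.sendto(reply, client)
      # Note: nothing here ever removes entries from `upstream` -- without visibility
      # into the inner connection close, the proxy can't know when to clean up.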


This is kind of a lot of work when there could be a protocol I can use instead, like OSCORE with EDHOC. Sure, I can go gluing things together, but that's more work for me and my people. We have other things to do, like building kick-ass network management tools, etc.


Having had to maintain proxies, yes, it's a lot of work. But!

Building a new protocol, and implementations for it, and applications for it, has got to be strictly more work than building a nested TLS protocol and implementation. More work for you, and for the rest of us who would have been happy not to have to deal with it but are forced to by new realities on the ground created by people who thought a new protocol would be less work. More chances to get the crypto wrong. I'm not keen on this. I'm all-in on TLS 1.3, not because it's great but because it's the only way to reduce the pressure to create new legacy. Legacy is fantastically expensive, so I'd rather not create more of it if we don't have to.

To be fair, I'm told that even TLS 1.3 might not fit the power profile of IoT devices; however, that's ameliorated by time, and it would also be ameliorated by cTLS, so I'm not sure that's enough to justify a new protocol.


Well, we have legacy anyway, because I can't economically use TLS or DTLS this way, so I already have a proprietary protocol. Presumably other designers are in the same position I am. And I'm not the only one looking hard at EDHOC and OSCORE; there are some pretty significant hitters in the IoT scene right now that are also looking at this, and for the same reasons we are: standardized approaches in an industry sorely needing both standards and functional solutions that work on real networks.

But yeah, building protocols is hard. I should know, since I've been building a dead-end protocol for over 10 years, including HSM support, massive scale, proxy scenarios, HA support, and blistering pen tests, that I am very ready to give up for OSCORE/EDHOC.

TLS is not going to be "ameliorated by time"; RF isn't getting cleaner, radios are already very efficient, power management has gotten decent and there's not a lot left to squeeze. Putting bits in the air will continue to cost money. Embedded IoT will continue to chase the tiniest scrappiest cheapest hardware. Where do you think this "amelioration" is going to come from? TLS is going to get worse for IoT over time, not better. There will be ever more code points in TLS. The WG is going to continue to focus on big-box issues, privacy issues, server deployment issues, ever finer controls on timing attacks, extensions for SNI and other CDN issues. IoT problems will be solved elsewhere, with a different protocol, and that's fine.


I'm curious what all we could throw out in a cTLS or uTLS to serve the power/RF-constrained environments of IoT devices.

If we must have a new protocol, I'm keen on making it so similar to TLS 1.3 that the costs of designing, reviewing, implementing, and maintaining it are much lower than those of a brand new protocol.


There are SO MANY changes to be made for constrained devices. Just a couple of quick ones: certificate order is all backwards; eliminate basically all use of DER, preferring CBOR instead; define larger, coarse protocol presets instead of dozens of optional extension points. Also, TLS has no certificate caching, which for some radios is nearly fatal and will be a major issue with embedded and PQC. Etc.
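
For a feel of the CBOR direction, here's a toy sketch (using the third-party cbor2 library; the field names and the "preset" idea here are made up for illustration, not taken from the draft):

  import cbor2   # pip install cbor2

  # A hypothetical compact hello: one coarse preset instead of many extensions,
  # raw key material instead of DER-wrapped structures.
  hello = {
      "preset": 1,
      "random": bytes(32),
      "key_share": bytes(32),   # e.g. a raw X25519 public key
  }
  wire = cbor2.dumps(hello)
  print(len(wire), wire.hex())  # a few dozen bytes on the wire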


CI.

I want to cache HTTPS responses from npm, GitHub, etc. so I get fast response times and less dependence on their uptime.

And I have no scruples about proxying my build robots.


How long does it normally take for such a draft to become a standard - 4-5 years or less?


The target track can change at any time.

The TLS WG might well adopt it.


Without commenting on the merits of this draft, I saw

"[[OPEN ISSUE: Should we just re-encode this directly in CBOR?. That might be easier for people, but I ran out of time.]]"

IMO, yes, please. The last thing the world needs is Yet Another Custom Binary Format, and CBOR is a very good choice (it happens to be one of my favorite encodings).


My life would be so much more pleasant if the ASN.1 stuff in X.509 et al. were in CBOR.


Mmm. You obviously wouldn't build anything like X.509 today (not least because the X.500 directory system never ended up building that global directory), but over time I appreciate more and more the things that I once thought were mistakes in X.509 and ASN.1.

The Distinguished Encoding Rules, for example. It's so easy to build a format that says you can do X or Y, and then you get nasty surprises because: which is it?

Or OIDs. Sure, today you figure we'll let IANA track things in a registry, or we'll do everything with URLs, which are inherently namespaced, but OIDs are in many ways a nicer option than either.


I would replace OIDs with URNs. Textual identifiers are much nicer than OIDs in any APIs and in any UIs (if they must leak there, which often happens). This goes for things like GSS-API as well.

I do NOT like definite-length encoding. It's nice to have the full length of a PDU up front, but inside the PDU you want to use indefinite length encoding for everything as this greatly simplifies encoding.

Tag-length-value (TLV) is an anti-optimization that was only ever a crutch. By the time the ITU-T worked on PER this was well understood. I feel that the use of TLV encoding rules held back [open source] compiler creation for ASN.1.

Compare to XDR and rpcgen. XDR is a lot like a subset of ASN.1 with PER-/OER-like encoding rules that use 4-octet alignment. XDR was long easier to use than ASN.1, but why? Mostly because a) it has C-like syntax and is fairly easy to build parsers for, and b) the encoding rules are very straightforward.

Protocol buffers makes some of the same mistakes as DER...

It's really difficult to reinvent wheels well. It really helps to not throw the baby out with the bathwater. Specifically, here, there's a lot of functionality in ASN.1 the _syntax_ that would be very good to keep. As for encoding rules, XDR is dead simple, so I'd use that for almost anything.

It's also good to know what's really missing. For my money, the most commonly missing feature is control over emitted types for the host language, and, specifically, lack of a way to emit private fields that are not to be encoded or decoded but which make it easy to keep state near relevant data.

XDR is so simple that as long as you have a decent buffer abstraction and support routines for encoding/decoding scalar types, it's easy enough to do without a compiler.
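
As a taste of how little machinery that takes, a hand-rolled XDR-style encoder (the record layout is hypothetical):

  import struct

  # XDR-style primitives: big-endian scalars, everything padded to 4-octet boundaries.
  def enc_uint(n):
      return struct.pack(">I", n)

  def enc_opaque(data):
      pad = (-len(data)) % 4
      return enc_uint(len(data)) + data + b"\x00" * pad

  def enc_string(s):
      return enc_opaque(s.encode())

  # Hypothetical record: a sensor id, a name, and an opaque payload.
  record = enc_uint(42) + enc_string("temp-sensor") + enc_opaque(b"\x01\x02\x03")
  print(record.hex())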

The most promising newish thing here is flatbuffers.


What's wrong with TLV? It helps with getting rid of end-of-contents terminators, no?


For one, the tags aren't necessary -- you know what belongs at any point, and for OPTIONAL/DEFAULT elements all you need to know is whether they are present or absent (that's one bit, or a byte if you're not aiming to be more space efficient).

Definite-lengths greatly complicate encoding. The easiest way to encode DER is "backwards", starting at the end of a SEQUENCE and working your way backwards. That's a bit of a pain.

Definite-lengths add lots of mostly-unnecessary redundancy.

For constrained-range integers, the best thing to do is to encode them in whatever number of octets is needed to cover the constrained range. For structures (SEQUENCE) and arrays (SEQUENCE/SET OF), indefinite-length encoding is much easier to encode. Decoding is easy because you just decode the fields -- a terminator shouldn't be needed (I know BER uses terminators, but that's a design flaw in the BER family of encoding rules, not a necessity).

Definite-length encodings or terminator encodings are mostly only needed for strings and extensibility holes (OCTET STRINGs, extensibility markers), but they can be the exception rather than the rule, and thus simplify things.
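
To make the contrast concrete, here's a toy encoder for a structure with an OPTIONAL field that uses a one-byte presence flag instead of TLV, and no lengths for fixed-size scalars (the record itself is entirely made up, just to illustrate the point):

  import struct

  # Hypothetical record: version (constrained 0..255), id (uint32),
  # and an OPTIONAL 8-byte timestamp.
  def encode(version, ident, timestamp=None):
      out = struct.pack(">BI", version, ident)            # no tags, no lengths
      if timestamp is not None:
          out += b"\x01" + struct.pack(">Q", timestamp)   # presence flag + value
      else:
          out += b"\x00"                                  # absent
      return out

  def decode(buf):
      version, ident = struct.unpack_from(">BI", buf)
      timestamp = None
      if buf[5]:                                          # one-byte presence flag
          timestamp = struct.unpack_from(">Q", buf, 6)[0]
      return version, ident, timestamp

  print(encode(1, 42).hex())
  print(decode(encode(1, 42, timestamp=1700000000)))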


That's not going to happen. You need an ASN.1 compiler and run-time. I use Heimdal's (which comes with a nice PKIX library with a very nice API).




