Hacker News
The First Few Milliseconds of an HTTPS Connection (2009) (moserware.com)
190 points by _pn3l on Oct 21, 2020 | 35 comments



Please note that this is from 2009 and a lot of things have changed. If you read it consider it a history lesson of how TLS was in 2009, not how it is today.


It would be useful if you were to list at least some of the things that have changed.


The most obvious thing is that it primarily describes the RSA handshake and RC4 encryption. For good reasons, neither is used any more.

But there are plenty of tiny details that are obsolete as well, e.g. it describes the client timestamp, which is no longer used and is usually filled with random data these days.


It's a shame the client's timestamp is no longer sent to the server. If it were, a server could compensate for a client clock that is wrong by many decades by serving an old, expired certificate.


Why doesn't a client accept a certificate from the future? What attack is that supposed to prevent?


X.509 certificates have a not-before and a not-after date, so validators check them. It's a real pain when you have to let a certificate sit for a day to compensate for clients with poorly set clocks or poorly written date comparisons.
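
The validity check being discussed is simple date arithmetic. Here is a minimal sketch of it in Python; the `skew` tolerance is a hypothetical knob for illustration, since real validators typically allow no slack at all (issuers backdate notBefore instead, as mentioned below):

```python
from datetime import datetime, timedelta, timezone

def in_validity_window(not_before, not_after, now=None,
                       skew=timedelta(hours=1)):
    """Check a certificate's validity window, tolerating modest clock skew.

    `skew` is a hypothetical tolerance for illustration; real validators
    generally allow none, which is why issuers backdate notBefore.
    """
    now = now or datetime.now(timezone.utc)
    return (not_before - skew) <= now <= (not_after + skew)

# A certificate issued "just now" is rejected by a client whose clock
# runs slow, unless notBefore was backdated or some skew is tolerated.
nb = datetime(2020, 10, 21, 12, 0, tzinfo=timezone.utc)
na = nb + timedelta(days=90)
slow_clock = nb - timedelta(minutes=30)
print(in_validity_window(nb, na, now=slow_clock))  # True, within tolerance
```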


Google's experiments suggest that the vast majority of clients are off by an hour (incorrect "daylight saving" adjustments / next door's timezone) or less. They did do work on having clients with Internet access bootstrap a roughly correct time from first principles, but I don't know if Chrome/Android do this out of the box.

Public Issuers generally backdate certificates by a small amount of time to allow for that one hour. This is permitted (unlike backdating to avoid other rules) by the Baseline Requirements, so you should find you don't need to wait before using the certificates you are issued.


> so you should find you don't need to wait before using the certificates you are issued.

Thanks, but no. I had to wait about 12 hours to deploy replacement certificates after Heartbleed, in order to balance cert errors with loss of privacy. A large number of the clients I had to support interpret the dates as local time, which isn't helpful when they're in the western hemisphere and GoDaddy wouldn't backdate certs at all. Last I had a cert issued, DigiCert set not before to the order time even for re-issues or duplicates, so that was a lot easier to manage.


This is true, but doesn't actually answer the question of what attack respecting the not-before date is supposed to prevent.


I think the 'attack' is allowing certificate issuers to work around validity rules by issuing a certificate that begins in 2098 and ends in 2099.


If this had been the behaviour of apps up to 2009, you could at least still serve certificates to those apps.


This is one of the reasons I still use http primarily and do not promote https.

There are many barriers to an https connection, and this is one of them.

Another is the version race, meaning browsers from as recently as 2012 are no longer compatible with today's https sites.


While I understand where you are coming from, and in fact use http on my personal site, I have to disagree about which is better.

For the Web (meaning HTTP/S), compatibility back past 5-6 years seems kind of pointless. There's too much changing too fast at too many layers to be a really good target, unless you are serving an actually static site (which is not a bad thing; it's what I do). Even then, HTTPS still has value, as it can prevent ISPs from injecting advertisements. I don't believe this is currently a large issue in the US, but I remember a recent discussion about bad behaviour by ISPs in India and other countries.

Edit: I do however believe that (for a static site on https) it is a good idea to allow a fallback to http, assuming of course that you are fine with the contents being mitm'd. Also, I think Gopher is a good thing to serve in this kind of situation.


For TLS 1.3 (a large fraction but not majority of today's HTTPS servers) let's go step-by-step like the article:

Random: Now all 32 bytes are random

Session ID: This will now all be random if we've never talked to this server before or if we believe it speaks TLS 1.3

Cipher suites: There are newer suites [with much shorter names] for TLS 1.3 because some parameters are now fixed.
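To make the "much shorter names" concrete: TLS 1.3 defines exactly five cipher suites (RFC 8446, appendix B.4), and the names only specify the AEAD cipher and the hash used by the key schedule; key exchange and signatures are negotiated separately. A quick reference:

```python
# The five cipher suites defined for TLS 1.3 (RFC 8446, appendix B.4),
# keyed by their two-byte code points. The names omit key exchange and
# signature algorithms because those are negotiated by other extensions.
TLS13_SUITES = {
    0x1301: "TLS_AES_128_GCM_SHA256",
    0x1302: "TLS_AES_256_GCM_SHA384",
    0x1303: "TLS_CHACHA20_POLY1305_SHA256",
    0x1304: "TLS_AES_128_CCM_SHA256",
    0x1305: "TLS_AES_128_CCM_8_SHA256",
}

# Compare a TLS 1.2 name, which bakes everything into one identifier:
TLS12_EXAMPLE = "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"  # code point 0xC02F
```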

SNI: This is still there (work on Encrypted SNI which became Encrypted Client Hello is ongoing)

NEW Supported versions: Compliant servers pick the newest version they recognise from a list the client provides.

NEW Key share: A TLS 1.3 client proposes one or more (Elliptic curve) Diffie Hellman key agreements. Most servers will accept one of them.
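The point of the key share is that the client guesses a group the server will accept and sends its half of the Diffie-Hellman exchange immediately, saving a round trip. A toy finite-field sketch of the idea (real clients use elliptic-curve groups such as x25519; the p=23, g=5 parameters here are classroom-sized and offer no security whatsoever):

```python
import secrets

# Toy finite-field Diffie-Hellman illustrating a TLS 1.3 "key share".
# WARNING: p=23/g=5 are illustrative only; browsers use x25519 or
# similar elliptic-curve groups with real security margins.
p, g = 23, 5

a = secrets.randbelow(p - 2) + 1      # client's ephemeral secret
client_share = pow(g, a, p)           # sent in the ClientHello

b = secrets.randbelow(p - 2) + 1      # server's ephemeral secret
server_share = pow(g, b, p)           # sent back in the ServerHello

# Both sides now derive the same secret; it never crosses the wire.
client_secret = pow(server_share, a, p)
server_secret = pow(client_share, b, p)
print(client_secret == server_secret)  # True
```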

OK, on to the Server Hello...

Random: Again all 32 bytes are random unless the server speaks TLS 1.3 (or newer) and has the impression at this point that the client does not. In this case the server overwrites a few of the bytes with the ASCII "DOWNGRD" (downgrade). A TLS 1.2 (or older) client sees no significance in this, but a TLS 1.3 client is alerted that somebody is meddling, because it's extremely unlikely that a server randomly spells "DOWNGRD" and also only knows an older protocol version.

Also, as a special case TLS 1.3 servers scribble over the entire random byte sequence with SHA256("HelloRetryRequest") if they wish you to try TLS 1.3 hello again with a different key share because all your proposals were unsuitable.
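Both of these "magic" ServerHello.random values come straight from RFC 8446 and can be reproduced in a few lines. The downgrade sentinel occupies the last 8 bytes of the random ("DOWNGRD" plus 0x01 when negotiating TLS 1.2, or 0x00 for TLS 1.1 and older), and the HelloRetryRequest value is literally SHA-256 of the string "HelloRetryRequest":

```python
import hashlib

def looks_downgraded(server_random: bytes) -> bool:
    # RFC 8446 section 4.1.3: a TLS 1.3 server negotiating an older
    # version sets the last 8 bytes of its random to this sentinel.
    return (server_random.endswith(b"DOWNGRD\x01")      # negotiated TLS 1.2
            or server_random.endswith(b"DOWNGRD\x00"))  # TLS 1.1 or older

# HelloRetryRequest: the server replaces the entire 32-byte random
# with SHA-256("HelloRetryRequest") to ask for a different key share.
HRR_RANDOM = hashlib.sha256(b"HelloRetryRequest").digest()
print(HRR_RANDOM.hex())
# cf21ad74e59a6111be1d8c021e65b891c2a211167abb8c5e079e09e2c8a8339c
```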

Session ID: The random nonsense from the client is echoed back in TLS 1.3

Cipher suite: Still whichever suite the server picked.

NEW Supported versions: A TLS 1.3 (or better) server echoes back the single version it selected here.

NEW Key share: Assuming the Client's proposed key shares are acceptable in TLS 1.3 the server answers with a share of their own here and then...

In TLS 1.3 and newer everything else is encrypted. Client and server have performed DH key agreement, so now they both know a random secret which they will use to begin encrypting any subsequent messages.
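The step from "shared DH output" to "encryption keys" runs through HKDF (RFC 5869), the extract-then-expand construction the TLS 1.3 key schedule is built on. Here is a minimal sketch of plain HKDF over SHA-256, checked against the first RFC 5869 test vector; the real TLS 1.3 schedule layers labelled contexts on top of this via HKDF-Expand-Label, which this sketch omits:

```python
import hashlib
import hmac

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    # HKDF-Extract: condense input keying material into a pseudorandom key.
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    # HKDF-Expand: stretch the PRK into `length` bytes of output keys.
    out, block, counter = b"", b"", 1
    while len(out) < length:
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        out += block
        counter += 1
    return out[:length]

# RFC 5869 test case 1 inputs:
prk = hkdf_extract(bytes.fromhex("000102030405060708090a0b0c"), b"\x0b" * 22)
okm = hkdf_expand(prk, bytes.fromhex("f0f1f2f3f4f5f6f7f8f9"), 42)
```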

Certificate: is an example of something that's now encrypted. Also, in TLS 1.3 the specification follows real-world practice: this isn't necessarily a "chain", just one leaf certificate plus any other certificates that might help the client reach a trust decision.

The TLS 1.3 protocol is a relatively nice shape in its natural form, but what you see on the wire is spelled very weirdly because this is the only way for it to pass various rusty middleboxes and not blow up poorly implemented legacy servers out there in the real world.


Awesome, thank you. This is one of those HN comments that go in my notes beside the original article.


Thanks for taking the time to share that!


I guess they're referring to http/2, http/3 and quic?


I missed that at first and when I scrolled down and saw TLS 1.0 I thought, "Wait, Amazon is using WHAT?"


I like how this ironic sentence is still 100% valid in today's world, 10 years later (2009-2020):

> This is a way to tell Amazon.com that our browser is trying to reach https://www.amazon.com/. This is really convenient because our TLS handshake occurs long before any HTTP traffic. HTTP has a “Host” header which allows a cost-cutting Internet hosting companies to pile hundreds of websites onto a single IP address. SSL has traditionally required a different IP for each site, but this extension allows the server to respond with the appropriate certificate that the browser is looking for. If nothing else, this extension should allow an extra week or so of IPv4 addresses.

Extra week? More of a few extra decades!
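You can watch the SNI extension the quoted passage describes leave in cleartext using nothing but the Python standard library, by driving a client handshake into a memory BIO instead of a real socket (the hostname here is just the article's example):

```python
import ssl

# Drive a client handshake into memory BIOs (no network needed) and show
# that the requested hostname appears in cleartext in the ClientHello,
# which is exactly the SNI extension the article describes.
ctx = ssl.create_default_context()
incoming, outgoing = ssl.MemoryBIO(), ssl.MemoryBIO()
tls = ctx.wrap_bio(incoming, outgoing, server_hostname="www.amazon.com")

try:
    tls.do_handshake()          # can't finish: no server is answering
except ssl.SSLWantReadError:
    pass                        # expected; the ClientHello has been written

client_hello = outgoing.read()  # the raw bytes that would hit the wire
print(b"www.amazon.com" in client_hello)  # True: SNI travels unencrypted
```

(This cleartext hostname is precisely what the Encrypted Client Hello work mentioned elsewhere in the thread aims to hide.)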


The Illustrated TLS Connection guide is also really useful

https://tls.ulfheim.net/


https://howhttps.works/ also interesting


NB Like this 2009 article, that cartoon is explaining a mechanism which is now obsolete. Your browser still knows how to do things that way (for at least a while yet), but it would rather not because it's less safe for you.

The TLS 1.3 walkthrough somebody else linked represents more or less what an actual browser does when talking to many popular sites, and even though TLS 1.3 isn't used by a majority of sites yet, the behaviour for most (but not all) TLS 1.2 sites now resembles that more than these older articles in crucial ways.

Most essentially, we do not do RSA kex (client picks a random secret, encrypts it with RSA, and sends it to the server, thus implicitly verifying the server knows the RSA private key) unless that's the only permitted way to get access. For whatever reason, people like explaining RSA key exchange long after we stopped wanting to use it, because it isn't forward secret.
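The forward-secrecy problem is easy to demonstrate with textbook RSA and toy-sized numbers (real keys are 2048+ bits, and real TLS used padding; this is a sketch of the principle only): anyone who records the traffic and later obtains the server's long-term private key can recover every session's secret.

```python
# Textbook-RSA toy showing why RSA key exchange isn't forward secret.
# Tiny primes for illustration only; real RSA keys are 2048+ bits and
# real TLS applied padding before encrypting the premaster secret.
p, q, e = 61, 53, 17
n = p * q                            # server's long-term public modulus
d = pow(e, -1, (p - 1) * (q - 1))    # server's long-term private exponent

premaster = 42                       # client's random secret (toy-sized)
on_the_wire = pow(premaster, e, n)   # what a passive observer records

# Years later the private key leaks (or is compelled)...
recovered = pow(on_the_wire, d, n)
print(recovered == premaster)  # True: every recorded session is readable
```

With (elliptic-curve) Diffie-Hellman key shares, by contrast, both sides' secrets are ephemeral and discarded, so there is no long-term key whose compromise unlocks past traffic.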


This is such a beautiful and cute comic! I love it!


And up to date (TLS 3.0, which is massively different)


There is no TLS 3.0. The published standard is TLS 1.3. Even though it's fundamentally quite different from TLS 1.2, and in the end it wasn't even possible to re-use TLS 1.2's version negotiation system, the standard is still named TLS 1.3.


Brainfart. I remembered the 3, and misplaced it.


I like how the blog page itself is on http.


HTTP and HTTPS are not mutually exclusive. They're synergistic. The idea that a hypothetical downgrade attack is enough justification to kill off HTTP everywhere is incredibly business-centric and stupid. Most sites are not transacting currency or private information.

A centralized cert is just that, centralized. Even if it's LetsEncrypt that's still a single point of failure for organizational corruption (ie, what happened to dot org), accidents (as cert providers mess up regularly) and government control. It's a long leash but if HTTPS is the only option (as is the trend) it's still a leash.

Almost all sites should be HTTP and HTTPS.


> Most sites are not transacting currency or private information.

> Almost all sites should be HTTP and HTTPS.

This. Most people aren't even aware that Google.com is still both HTTP and HTTPS; you can easily test with curl or lynx to verify this yourself. If your browser doesn't support HTTPS (e.g., maybe it's blocked in your library or on some other free wifi), then Google.com will still work just fine. (Sadly, many other sites aren't as easily available.)


Yet Chrome won't put sites on the HSTS preload list unless they force all users onto HTTPS.


It is also served over https




(2009)


Added. Thanks!



