It's the same transport (TCP, assuming something like HTTP/1.1), and trying to mix HTTP and HTTPS seems like a difficult thing to do correctly and securely.
NGINX detects attempts to use http for server blocks configured to handle https traffic and returns an unencrypted http error: "400 The plain HTTP request was sent to HTTPS port".
Doing anything other than disconnecting or returning an error seems like a bad idea though.
Theoretically it would be feasible with something like STARTTLS, which allows upgrading a connection (it's part of SMTP, and IMAP has a similar command), but browsers do not support this as it is not part of standard HTTP.
It actually is part of standard HTTP [0], just not part of commonly implemented HTTP.
The basic difference between SMTP and HTTP in this context is that email addresses do not contain enough information for the client to know whether it should be expecting encrypted transport or not (hence MTA-STS and SMTP/DANE [1]), so you need to negotiate it with STARTTLS or the like. The https URL scheme, by contrast, tells the client to expect TLS, so there is no need to negotiate; you can just start in with the TLS ClientHello.
In general, it would be inadvisable at this point to try to switch hit between HTTP and HTTPS based on the initial packets from the client, because then you would need to ensure that there was no ambiguity. We use this trick to multiplex DTLS/SRTP/STUN and it's somewhat tricky to get right [2] and places limitations on what code points you can assign later. If you wanted to port multiplex, it would be better to do something like HTTP Upgrade, but at this point port 443 is so entrenched, that it's hard to see people changing.
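The DTLS/SRTP/STUN multiplexing mentioned in [2] works by partitioning the first byte of each packet into non-overlapping ranges. A minimal sketch of that demultiplexing logic (the ranges follow RFC 7983; the function name is just for illustration):

```python
def demux(first_byte: int) -> str:
    """Classify a packet by its first byte, using the
    non-overlapping ranges defined in RFC 7983."""
    if 0 <= first_byte <= 3:
        return "STUN"
    if 16 <= first_byte <= 19:
        return "ZRTP"
    if 20 <= first_byte <= 63:
        return "DTLS"       # TLS/DTLS record content types live here (e.g. 0x16 = handshake)
    if 64 <= first_byte <= 79:
        return "TURN channel"
    if 128 <= first_byte <= 191:
        return "RTP/RTCP"
    return "unknown/reserved"
```

The "limitations on what code points you can assign later" follow directly: any future protocol on the same port must avoid first bytes already claimed by these ranges.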
> In general, it would be inadvisable at this point to try to switch hit between HTTP and HTTPS based on the initial packets from the client, because then you would need to ensure that there was no ambiguity.
Exactly my original point. If you really understand the protocols, there is probably zero ambiguity (I'm assuming here). But with essentially nothing to gain from supporting this, it's obvious to me that any minor risk outweighs the (lack of) benefits.
You can in fact run HTTP, HTTPS (and SSH and many others) on the same port with sslh (it's in the Debian repos). sslh will forward incoming connections based on the protocol detected in the initial packets. Probes for HTTP, TLS/SSL (including SNI and ALPN), SSH, OpenVPN, tinc, XMPP, and SOCKS5 are implemented, and any other protocol that can be tested using a regular expression can be recognised.
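The core of this kind of probing is cheap to sketch. A TLS connection opens with a record whose first bytes are a content type and version (0x16 0x03 for a handshake), while plain HTTP opens with an ASCII method token. A toy classifier along those lines (function and label names are illustrative, not sslh's actual code):

```python
# Methods defined by HTTP/1.1; each is followed by a space in a request line.
HTTP_METHODS = (b"GET", b"HEAD", b"POST", b"PUT", b"DELETE",
                b"OPTIONS", b"PATCH", b"CONNECT", b"TRACE")

def classify(initial: bytes) -> str:
    """Guess the protocol from the first bytes a client sends."""
    # TLS record: content type 0x16 (handshake), then major version 0x03.
    if initial.startswith(b"\x16\x03"):
        return "tls"
    # Plain HTTP request line: "<METHOD> <target> HTTP/1.1".
    if any(initial.startswith(m + b" ") for m in HTTP_METHODS):
        return "http"
    return "unknown"
```

sslh does the same thing more carefully (timeouts, partial reads, regex probes), then hands the socket to the appropriate backend.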
How is it more of a security issue than exposing the same services on other ports? Seems to me it's actually the better kind of don't-call-it-security-through-obscurity?
I think what I had seen before was replacing the HTTP variant of the "bad request" page with a redirect to the HTTPS base URL, something akin to https://serverfault.com/a/1063031. Looking at it now, this is probably more "hacky" than it'd be worth and, as you note, probably comes with some security risks (though for a local development app like this maybe that's acceptable, just as using plain HTTP otherwise is), so it does make sense that's not an included feature after all.
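For reference, the trick in that serverfault answer relies on nginx's non-standard internal code 497 ("plain HTTP request sent to HTTPS port"); intercepting it turns the default 400 page into a redirect. A rough sketch (certificate paths are placeholders):

```nginx
server {
    listen 443 ssl;
    ssl_certificate     /path/to/cert.pem;  # placeholder
    ssl_certificate_key /path/to/key.pem;   # placeholder

    # 497 is nginx's internal "plain HTTP sent to HTTPS port" code;
    # rewrite it into a redirect instead of the default 400 response.
    error_page 497 =301 https://$host:$server_port$request_uri;
}
```

Note the redirect itself is still sent over the unencrypted connection, which is part of the security concern mentioned above.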
In general the way that works is user navigates to http://contoso.com which implicitly uses port 80. Contoso server/cdn listening on port 80 redirects them through whatever means to https://contoso.com which implicitly uses 443.
I don't see the value in both being on the same port. Why would I ever want to support this when the http: or https: scheme essentially defines the default port?
Now of course someone could go to http://contoso.com:443, but WHY would they do this? Again, failing to see a reason for this.
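The standard port-80 redirect described above is usually a one-liner; in nginx it looks something like this (contoso.com stands in for the real hostname):

```nginx
server {
    listen 80;
    server_name contoso.com;
    # Permanent redirect to the HTTPS origin, preserving the path.
    return 301 https://contoso.com$request_uri;
}
```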
The "why/value" is usually in clearly handling accidents in hardcoding connection info, particularly for local API/webdev environments where you might pass connection information as an object/list of parameters rather than normal user focused browser URL bar entry. The upside is a connection error can be a bit more opaque than an explicit 400 or 302 saying what happened or where to go instead. That's the entire reason webservers tend to respond with an HTTP 400 in such scenarios in the first place.
Like I said though, once I remembered this was more a "hacky" type solution to give an error than built-in protocol-upgrade functionality, I'm not so sure the small amount of juice would actually be worth the relatively complicated squeeze for such a tool anymore.
Do any real web servers support this?