
The modern HTTPS world has no place for old web servers - dredmorbius
https://utcc.utoronto.ca/~cks/space/blog/web/HTTPSNoOldServers
======
Wowfunhappy
Prior to last year's release of macOS Catalina, OS X shipped with a Dashboard
widget to display the weather forecast. In 2019, Apple broke this widget by
turning off the servers it used.

Luckily, Dashboard widgets are just editable html and javascript files, so I
rewrote a portion of Apple's weather widget to use the DarkSky API instead.
Since the entire point of this project was to support a legacy feature (the
Dashboard), I really wanted it to work on the full gamut of OS X 10.4 Tiger –
10.14 Mojave.

My modified version worked fine on 10.9 and above, but on 10.4 – 10.8, users
reported being unable to retrieve any weather data. After some back and forth
looking at logs, I found the problem: old versions of OS X didn't support the
modern versions of TLS that DarkSky requires. I couldn't fix this, because
DarkSky doesn't offer their API via HTTP.
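
For anyone debugging something similar: you can probe which protocol versions
an endpoint accepts with openssl s_client (the hostname here is just for
illustration). Each command fails with a handshake error if the server
rejects that version:

    # Probe which TLS versions an HTTPS endpoint accepts.
    # Note: some modern openssl builds are compiled without TLS 1.0/1.1 support.
    openssl s_client -connect api.darksky.net:443 -tls1   < /dev/null   # TLS 1.0
    openssl s_client -connect api.darksky.net:443 -tls1_1 < /dev/null   # TLS 1.1
    openssl s_client -connect api.darksky.net:443 -tls1_2 < /dev/null   # TLS 1.2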

Was this really necessary? Weather forecasts are public information, so what
level of safety does HTTPS provide that it should be not just a default, but
the _only_ option for developers?

(Luckily, I had the chance to rectify this when Apple bought DarkSky and
forced me to change APIs. I'm now using Weatherbit, which offers HTTP. I would
have preferred to use HERE, but like DarkSky, they're HTTPS-only.)

[https://forums.macrumors.com/threads/i-fixed-apples-broken-w...](https://forums.macrumors.com/threads/i-fixed-apples-broken-weather-dashboard-widget.2225715/)

~~~
earthboundkid
Fetching a weather forecast reveals that I was in a certain location.
Depending on circumstances, that could easily be incriminating evidence. Yes,
Dark Sky should be HTTPS by default.

~~~
cguess
It could also imply where you're going. Looking up the weather in London
(when I don't normally) almost certainly entails a trip.

This is pretty deep down the threat modeling hole though, and no, normal
people don't need to care about this.

~~~
SilasX
What, you don't bombard weather APIs with dummy requests about random cities
so as to obscure where you might be going?

~~~
brewdad
When baseball season is actually occurring I will sometimes look up weather in
a half dozen cities around the country to get an idea of which games might get
rained out. Not exactly the same thing but I'm not actually traveling to any
of them.

------
pmlnr
Everyone who adds "certbot is EZ" comments: it makes you rely on a service.
This is the very problem with certificates: self-signed certs were never
trusted, so they are not an option.
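
For reference, generating a self-signed cert is a one-liner; the problem is
purely that nothing trusts it (domain and filenames below are placeholders):

    # Generate a self-signed certificate valid for one year.
    # Browsers will warn on it regardless, since it chains to no trusted root.
    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
      -subj '/CN=example.com' -keyout example.key -out example.crt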

HTTP isn't supposed to rely on external services that can die, cut you off,
block you, etc.

This is not a matter of how easy it is to align with the current
browser-dictated situation.

EDIT: LoTR typo (elf vs self) fixed.

~~~
mschuster91
> This is the very problem of certificates, because self-signed was never
> trusted and is not an option.

What I don't get is, with DNSSEC getting rolled out more and more, why can't
every site operator simply stick the fingerprint of their HTTPS cert in DNS
and have DNSSEC take care of the trust chain?
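
This idea essentially exists already as DANE (RFC 6698): a TLSA record in the
zone carries a hash of the certificate or public key, protected by DNSSEC. A
rough sketch, with a placeholder domain and hash; the catch, historically,
has been browser support:

    # Look up the DANE TLSA record for HTTPS on a domain.
    dig _443._tcp.example.com TLSA +dnssec

    # The corresponding zone entry looks roughly like this --
    # usage 3 (DANE-EE), selector 1 (public key), matching type 1 (SHA-256):
    # _443._tcp.example.com. IN TLSA 3 1 1 <sha256-of-public-key-in-hex>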

~~~
josteink
> What I don't get is, with DNSSEC getting rolled out more and more, why
> can't every site operator simply stick the fingerprint of their HTTPS cert
> in DNS and have DNSSEC take care of the trust chain?

This question is extremely relevant given how DNS-based authentication is the
preferred way to authenticate with letsencrypt.

If DNS is good enough for letsencrypt to validate you and give you a cert, why
can't DNS be good enough for the browser alone, without needing to rely on
some third party service?

Letsencrypt has certainly made HTTPS easier in practice, but we really need to
get rid of the whole CA-model entirely.
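
For reference, the DNS-01 challenge mentioned above amounts to proving you
can publish a TXT record under the domain; a rough certbot invocation (domain
is a placeholder):

    # Request a cert via the DNS-01 challenge; certbot prints a token to
    # publish as a TXT record at _acme-challenge.example.com, then verifies it.
    certbot certonly --manual --preferred-challenges dns \
      -d example.com -d '*.example.com'

Let's Encrypt only issues wildcard certs via DNS-01, which is presumably part
of why it's the preferred method.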

~~~
mytailorisrich
What Letsencrypt does (and all other providers of certificates to end users
do) is verify that you control the domain you are requesting a certificate
for.

That's well and good, but who are those Letsencrypt guys, anyway? That's why
there is a chain of trust and Certificate Authorities.

You, and your browser, cannot know everyone out there so you pick a few
'authorities' that are well-known and deemed serious enough to be trusted and
go from there.

DNSSEC operates on the same principle of a chain of trust from root.

~~~
xoa
But isn't the point here (and a source of confusion I share with all the other
posters) that DNS is the actual root of trust? Heck, forget DNSSEC even,
because it's not like Let's Encrypt demands it. You say that LE verifies

>that you control the domain you are requesting a certificate for

but it mostly seems to do it the exact same way any random person browsing to
an HTTP site would: it checks the DNS records. It's not like there's an out of
band channel here where they actually verify business records separately or
something. So what exactly is the value of the middle man in this? Why not
just have public sigs in the DNS records too? When DNS is the root of trust
anyway and "Certificate Authorities" are reduced to fully automated systems,
what value do "Certificate Authorities" even bring anymore in this context? It
seems like a hack from a time when CAs were expected to provide some out of
band verification that would actually be useful. But on the web that mostly
hasn't happened or has been rendered moot (witness the death of EV).

I can see how central CAs could still be useful in many other circumstances.
But for anything where control of DNS directly correlates with control of the
entire chain, it seems like it'd be a lot better to just decentralize into
DNS.

~~~
mytailorisrich
The value of certificate authorities is to be the roots of trust, as said.

The exact same applies to using DNS as chain of trust. You have to start with
a well-known root of trust because it's impossible to know all the DNS servers
or registrars out there. In fact that's exactly how DNSSEC works.

It seems that the question is whether DNS and HTTPS certificates are
converging to provide the same service. Perhaps, though I'm not sure, but that
wouldn't change the system fundamentally.

~~~
xoa
> _The value of certificate authorities is to be the roots of trust, as said._

But they aren't, DNS is. That's my question. If someone controls my domain,
they can point it wherever and get all the Let's Encrypt CA signed certs they
want. So how exactly is the CA being a _root_ of trust there if the CA itself
is basing trust off of domain control? In a neighboring comment it seems that maybe
the CA is basically acting as a hack to bypass an inability by clients to
check DNS? I can see why that would have some practical value in the near term
but it'd be good to do away with it as soon as possible.
Apple/Google/Microsoft (and maybe Mozilla) may be in a position to do so if no
one else.

~~~
mytailorisrich
This is another question.

You're discussing how to prove to Let's Encrypt, or anyone else, that you are
the legitimate owner of a domain.

That does not mean that I know or trust Let's Encrypt. The root of trust is an
entity I know and trust and which can vouch for Let's Encrypt, which can in
turn (or not) vouch that you are the legitimate owner of a domain.

The same applies to DNSSEC. The root of trust being the root servers.

------
tleb_
> In the era of HTTP, you could have set up a web server in 2000 and it could
> still be running today, working perfectly well (even if it didn't support
> the very latest shiny thing).

I'm not sure this assumption is right, looking at the CVE list and in
particular entries associated with Apache.

[http://cve.mitre.org/cve/](http://cve.mitre.org/cve/)

[https://www.cvedetails.com/product/66/Apache-Http-Server.htm...](https://www.cvedetails.com/product/66/Apache-Http-Server.html?vendor_id=45)

~~~
fomine3
I believe OP wants to say that it just doesn't stop working.

~~~
l0b0
I think the GP comment demonstrates that it very likely _will_ stop working
very shortly if left on the open web.

------
p49k
It takes 10 minutes to put an old HTTP server behind a reverse proxied nginx
instance with auto-renewing letsencrypt certs (you can even reverse proxy old
HTTPS-supporting servers that have outdated TLS versions).
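
A minimal sketch of such a front end, assuming certbot's nginx plugin manages
the certificate and the legacy box sits at 10.0.0.5 on the internal network
(names and addresses are placeholders):

    # /etc/nginx/conf.d/legacy.conf -- TLS terminates here; the old server
    # behind it keeps speaking plain HTTP.
    server {
        listen 443 ssl;
        server_name legacy.example.com;

        # Certificate paths as laid out by certbot ("certbot --nginx" renews them).
        ssl_certificate     /etc/letsencrypt/live/legacy.example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/legacy.example.com/privkey.pem;

        location / {
            proxy_pass http://10.0.0.5:80;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }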

I also disagree with the idea that HTTPS introduced the problem of requiring
maintenance; anything you expose to the public internet needs maintenance and
security upgrades.

~~~
yc12340
> anything you expose to the public internet needs maintenance and security
> upgrades.

Uh, no — in the past you could purchase a hosting solution that did its own
TCP termination (such as shared hosting) and completely forget about your
site. Many static websites still exist solely because of that.

Now you can accomplish the same thing by paying Cloudflare (or some other
party) to terminate your TLS connection. One more kind of parasite has been
enshrined in the technology stack.

~~~
Sebb767
You can also pay Hetzner, Wordpress, 1&1, AWS or any other hosting provider,
most of which also offer automatic certificate renewal by now. I don't see how
there is any change, especially since hosting has gotten a lot cheaper.

~~~
crazypython
Do they offer automated certificate renewal for services with constantly
changing IPs? I don't think so.

~~~
Sebb767
Well, you surely could do this via some proxy setup. But in that case you
could also simply rely on LetsEncrypt directly; them going away is about as
likely as your average hosting provider going away.

Now, of course, that's an item on the maintenance list if you're hosting the
web page yourself, but as mentioned above: you still need regular updates, so
having certificate renewal on that list doesn't make any noticeable
difference.

------
mmmeff
Everyone mentioning their quick and scrappy solutions to setting up TLS
termination is completely missing the spirit of the author’s letter. Did y’all
even read it?

~~~
user5994461
You mean the point that "you could have set up a web server in 2000 and it
could still be running today, working perfectly well"?

Try to set up a Windows 98 machine as a web server and see how it goes
(Windows XP only came out in 2001). It doesn't do HTTP/1.1, so it's a good
question whether clients would be able to connect at all.
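
Whether HTTP/1.0 still interoperates is at least easy to test from the client
side (example.com as a stand-in):

    # Force an HTTP/1.0 request against a modern server; most still answer it,
    # though name-based virtual hosting relies on the Host header that 1.0
    # clients don't necessarily send.
    curl --http1.0 -v http://example.com/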

~~~
morganvachon
> _Try to setup a windows 98 machine as a web server and see how it goes_

Windows 98 was a client OS, it was never intended for this. A proper test
would be Windows NT 4.0 Server. Or more realistically for the time frame,
Solaris 7 or 8.

~~~
snowwrestler
Any web server software set up in 2000 and still running today unpatched could
be pwned quickly. Including and especially NT 4.0.

When you're running a server, TLS is not the only service you are dependent
on; you're also dependent on security updates. If they're not backported,
you'll need to make version upgrades.

Arguably it's better to think of every piece of software running on a server
like it's a service, to maintain security and performance in an Internet full
of hostile actors.

I don't even understand the mindset of wanting to run a live server on ancient
software, aside from doing it as a hobby. In which case, it's pretty typical
for such hobbies to require extra work and care. Driving around in a car from
the 1940s takes more maintenance work, and is less safe, than a modern car, and
everyone gets that.

~~~
morganvachon
I think you misunderstood my comment, I'm not advocating doing this. I was
simply correcting the person above me who appears to be under the impression
that web servers in the 90s ran on a client OS.

------
princekolt
I agree with the author's points entirely, but I'm going to play devil's
advocate here and argue that this is fine.

Sure, old websites will stop working. But just as we shut down FM radio and
analog TV to make room for new technology, rendering several generations of
radio and TV devices useless, the internet will need to learn to live with
the fact that we can't possibly support its entire spectrum of functionality
forever.

However, I see this as a tooling and ergonomics issue. Despite what people say
about certbot, it is still not 100% trivial to use. If your stack differs
slightly from the canonical HTTPS server setup, you're going to have issues.
I'm not saying these problems are inherent to certbot, just that there is room
for improvement.

Once we have normalized how certificates are signed and verified, and use
encryption methods that can easily be tuned for increased computing power, and
more importantly, once HTTP 1.0 is no longer an expectation, I believe the
amount of manual work needed to maintain a server will go down again.

We're at a transition phase, and we all know these are never easy (except if
you're Apple and you're transitioning CPU architectures, apparently).

~~~
generalpass
Indeed. As I understand it, *BSD users are currently left out of wildcard
certs from Let's Encrypt. HN runs behind an nginx proxy because the forum
software doesn't support HTTPS.

~~~
hedora
There are a few ACME clients for OpenBSD, at least. I'm using one that was
written in bash, with httpd, and I think they added a new one to the base
image. Using the script was a bit annoying because I had to install bash from
ports.

As on my Linux and Synology machines, the bash script has broken once or twice
because it’s against modern best practices to define and stick with stable
protocols.

Sometimes I think it would be easier to rewind to the 90’s web and build a
sane ecosystem on top of that. Every aspect of the web that I interact with
has gotten worse in the last decade.

(Also, HN’s is using a non-wildcard cert from digicert, at least according to
my web browser).

~~~
generalpass
I was responding to OP regarding non-trivial, which, I dare say, your reply
agrees with. I mean, geez, you've resorted to Bash in OpenBSD.

The Arc Language app that this forum runs on doesn't support HTTPS, so
wherever the cert comes from, accessing the site is through a reverse proxy.

~~~
hedora
Well, that was before they added acme-client, which works out of the box, more
or less:

[https://man.openbsd.org/acme-client.1](https://man.openbsd.org/acme-client.1)
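
The setup is a small /etc/acme-client.conf, roughly like the following
(domain and paths are illustrative; acme-client.conf(5) has the exact
grammar), plus running "acme-client example.com" periodically from cron:

    # /etc/acme-client.conf -- sketch only; consult acme-client.conf(5).
    authority letsencrypt {
            api url "https://acme-v02.api.letsencrypt.org/directory"
            account key "/etc/acme/letsencrypt-privkey.pem"
    }

    domain example.com {
            domain key "/etc/ssl/private/example.com.key"
            domain full chain certificate "/etc/ssl/example.com.fullchain.pem"
            sign with letsencrypt
    }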

I chose the bash client over the python client because I didn’t want to use
pip to pull unvetted software from randos on the Internet. (Also, I hate
python, and eliminating use cases for it makes me happy.)

Anyway, the bash script is still quite popular, and is extremely reliable.
Since I used it, they eliminated the bash dependency, and now it is plain
POSIX shell:

[https://github.com/acmesh-official/acme.sh](https://github.com/acmesh-official/acme.sh)

I would definitely use it in the future if I was configuring a Linux box to
renew certificates.

The breakages I encountered were all on the Let’s Encrypt side, and broke all
the clients, including the official one.

Anyway, it has been trivial to use Let’s Encrypt with OpenBSD for many years.

~~~
generalpass
In my searching, I had not come across acme.sh, and I was not aware that
acme-client could be used for wildcard certs.

------
upofadown
The root problem here seems to be that web browsers (and to some extent
servers) have become quite insecure in practice due to the insane complexity
of all the different protocols they now support. If you are using Mosaic to
access ancient web sites, the most an attacker can do is change the content.

Now remote attackers have a simply vast attack surface. They can actually run
code on your computer. History has shown such code can never be perfectly
sandboxed. So you need to be very sure you are where you think you are and
that you are only getting content from that place. TLS is an essential
band-aid fix.

The same sort of thing comes up anywhere HTML is used. People sometimes claim
that email is inherently insecure. In most cases it turns out that html email
is inherently insecure.

It seems that any popular protocol eventually gets extended to the point where
it is impossible to keep it secure. It also seems that the popularity of the
protocol causes a period of denial.

~~~
ipython
Let's be honest: remote attackers could run code on your computer running
Mosaic _a lot more easily_ than on one running modern Chrome today.

RCE in Mosaic or Netscape was probably trivial; it just wasn't as well studied
as it is today. Just look at how many times those browsers would crash in the
normal course of web browsing, even back then. Nobody was intensely fuzzing
their HTML parsing engines, image processing libraries, scripting elements,
plugins, etc.

Heck, IE used to load and run arbitrary code as “activex” controls. No fancy
exploit needed!

~~~
anthk
Opera, not so much; it was pretty decent and full of features. But nearly all
"decent" web software required IE with ActiveX through an OCX. There was also
a Netscape plugin, I think.

------
at_a_remove
I agree with this wholeheartedly.

I had to maintain a lot of old custom appliances and the like for which no
ready replacement existed. Niche stuff where you had one or two vendors in the
whole arena. If they were running on Apache, it was hidden somewhere with a
big "WARRANTY NULL AND VOID" sticker. For all of these custom jobs, HTTPS was
an afterthought, if it was present at all. For these, I had to generate cert
requests through a menu system, load up certs through a menu system, and the
like. I couldn't just CertBot it. That's never going to happen.

Frankly, the latest cult fad ("all must be encrypted, heretic!") has not made
my life easier nor brought any benefit to my users. Nobody was going to MitM
for this stuff. It's just another tiresome fashion like the people who
relentlessly put W3C validation boxes at the bottom of their webpages, only
now it is getting shoved through by the same browser vendors who wonder if us
mere mortals need to see URLs at all.

~~~
kerkeslager
So, I'm definitely not against vanilla HTTP _where appropriate_, but here are
three arguments for the "all must be encrypted, heretic!" side of things:

1\. The vast majority of web developers/IT people/etc. aren't really
knowledgeable enough to make the call whether something actually needs to be
encrypted. Very smart developers have told me they weren't worried about
security because their application didn't hold any sensitive data. Sometimes
this was because they thought emails and/or passwords weren't sensitive data.
I've also had very smart people tell me they weren't worried about security
because nobody had any incentive to hack their server. But there is _always_
incentive for bad people to hack a server: even if they can't ransomware you,
they can host botnets or child pornography. Having seen situations like this
enough, I'm no longer even convinced that even I am correct when I think
security isn't necessary. Maybe you're qualified to make that call, but even
then, evaluating the system may be harder than just adding security.[1]

2\. Just because your system doesn't need HTTPS now, doesn't mean it never
will. YAGNI conveniently doesn't specify whether "A" stands for "Are" or
"Aren't". In my experience, most systems involving HTTP _do_ eventually have a
requirement that passes sensitive data over HTTP, and it would be all-too-easy
to forget that channel wasn't encrypted.

3\. Even if you don't need encryption for yourself, do it for everyone else
who does need encryption. Before HTTPS was required in every browser, users
would happily submit their passwords over completely unencrypted channels.
Now, browsers will give big warnings that hopefully prevent users from doing
this. Those warnings probably aren't necessary if, for example, you're
submitting an anonymous comment to a Disqus conversation. But it's important
not to normalize clicking past those warnings for nontechnical users, because
if they click past those warnings to post an anonymous Disqus comment,
they'll click past them to submit a password. And in a more political sense,
normalizing encryption for stuff where it isn't important protects encryption
for stuff where it is. "If you are encrypting you must be hiding something
bad" is a dangerous narrative which is pervasive in politics, and default-on
encryption is one of the best tools against that.

[1] As an example of this, it appears that there are a few attack vectors in
your own posts, which you either didn't think of or don't care about, but
which I would very much care about as a user of your library catalog.

~~~
at_a_remove
I've heard these arguments. I am not unsympathetic.

For the first, I don't know what to tell you. If they aren't qualified to know
when security must exist, they are not likely qualified to implement it.

For the second, when it is needed, it can be added. A staff directory with no
interaction does not need it now. It may never need it.

As to the third, in a story, here is a great argument _against_ that:

So years ago I ran this dumb little site, by request, for the users to submit
stuff, nothing too special. They suddenly wanted HTTPS, which was outside
their budget (of zero) at the time, and we would have had to move the site
off of a shared IP (this was some time ago). I told them it wasn't necessary
for their use case.

"But the LOCK thing makes it safe!" They were so insistent ... but they still
demanded the results be sent to them over SMTP. Oh, how I facepalmed. I
explained that SMTP was its own channel. No, no, they insisted. The _LOCK_.

HTTPS can result in a false sense of security.

At the end of the day, I think I would rather have choices available even if
mistakes could be made. Removing choice from people for their own good is
fantastic if and only if it will never ever cause any kind of issue on its
own. Here, in the article, we discuss some of those issues. If HTTPS were
possible to just implement retroactively with nothing ever breaking, not even
some ancient software on a floppy disk, fine. For anything else, it's a
tradeoff, and I believe for tradeoffs we ought to have the ability to assess
the tradeoffs and make a choice. Even choices others do not agree with!

~~~
kerkeslager
> For the first, I don't know what to tell you. If they aren't qualified to
> know when security must exist, they are not likely qualified to implement
> it.

...which is why numerous high-quality implementations already exist.

There isn't a way to say this without risking offending you, but if you can
take it for what it is, I think you would benefit from it: you have posted a
few examples in this thread where you think no one would care about
encryption, and there are people posting here saying they do care about
encryption in those cases. It might behoove you to approach this with a bit of
humility and acknowledge that you are one of the people who isn't qualified to
know when people need/want security, and your opinions are born of that
fact.

~~~
Karrot_Kream
This feels like a no-true-scotsman argument. A true security-minded person
understands the benefits of TLS, and if you don't, then you're not qualified
to understand security.

~~~
kerkeslager
I'd view it as more tautological than that. "If you don't understand security
then you don't understand security" isn't really controversial.

~~~
at_a_remove
"If you have not yet attained Buddhahood, you cannot understand the value of
enlightenment." is also a non-controversial tautology, but it also doesn't
leave any room for anyone to do anything else.

~~~
kerkeslager
If you're being paid to secure a system, there isn't room for you to not
secure the system.

~~~
at_a_remove
The most secure system, naturally, being one that is turned off with its
network connection desoldered.

At last, nirvana!

~~~
kerkeslager
So clever! But, obviously we're talking about making a good-faith effort to
secure servers while allowing them to provide their intended functions.

Refusing to implement HTTPS because you don't know how to use Certbot isn't
that.

~~~
at_a_remove
As stated, Certbot wouldn't work in the case I mentioned.

~~~
kerkeslager
Based on your description, it would work just fine, you're just prevented by
contract from using it.

------
timw4mail
On the other side, it is now almost impossible to use the internet on a retro
computer.

I absolutely understand the drive to use more secure TLS, but it's a really
sad experience to see that you can load Google, but have nearly every link
unable to load.

It wasn't that long ago that I could try out some ancient browser on some
modern website, and at least see it fail to render well. Now you can't even
connect.

This leads to interesting situations with, say, PowerPC Mac software sites.
Either they have http connections, or they use user-agent sniffing to
downgrade TLS.

~~~
mrunkel
> This leads to interesting situations with, say, PowerPC Mac software sites.
> Either they have http connections, or they use user-agent sniffing to
> downgrade TLS

That doesn't work. The user-agent is passed after the encryption channel is
established.

What they do is offer all the options, and the client picks the best one it
can support.

However, why would this need to be HTTPS to begin with? If you're not
transmitting anything private, HTTP works just fine.

------
notRobot
Ugh. I have a bunch of "projects" of mine on the internet, many of which run
the most minimal (I'm talking [0]) HTTP servers on embedded hardware and stuff
like Arduino, where HTTPS just _isn't_ possible.

And the only reason I keep them online is that they don't require _any_ upkeep
whatsoever, don't rely on external services for certs or whatever, and the
occasional web browser stumbles upon them and finds them useful. I really
don't want to actually put work into them, though.

[0] [https://stackoverflow.com/questions/16640054/minimal-web-ser...](https://stackoverflow.com/questions/16640054/minimal-web-server-using-netcat)
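
Something in the spirit of the answers there; nc flags differ between the
GNU, BSD, and busybox implementations, so treat this as a sketch:

    # Serve one static page forever on port 8080 (BSD-style nc shown;
    # GNU netcat wants "nc -l -p 8080" instead).
    while true; do
      { printf 'HTTP/1.0 200 OK\r\nContent-Type: text/html\r\n\r\n'
        cat index.html
      } | nc -l 8080
    done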

~~~
WilliamEdward
Users are all about privacy and security until they realise they have to
actually maintain their websites to achieve it; then it becomes an unnecessary
chore.

------
bartread
> Another, more relevant side of this is that it's not going to be possible
> for people with web servers to just let them sit.

This problem goes far beyond just HTTPS, and here's a small example. I have a
side project, a website where I post versions of old arcade games I've
created. I can go months without deploying any changes, but despite that,
things just keep breaking from time to time.

Without digging through my GitHub issues here are three examples:

\- Web audio support at different times in both mobile and desktop browsers
(now fixed)

\- Motion control support via accelerometer (now fixed)

\- Full screen support (still not fixed, although bizarrely works fine on iOS
which has no explicit full screen support but will enter full screen
automatically when you reorient your device)

The fact that you can't simply deploy code and have any realistic expectation
that it will continue to run without supervision or tweaking for even a few
months is INCREDIBLY ANNOYING.

People need to stop tweaking APIs and the behaviour backing them, and breaking
compatibility all the time.

~~~
gambler
This is, in reality, the biggest problem. A lot of modern technologies have
built-in assumptions that most people promoting them didn't ever think
through. "Everything will be updated all the time, everywhere" is one of those
assumptions. "Security" people who yammer about MitM attacks don't give the
slightest fuck about that other side of the coin, because that's not what
they're paid for. However, a lot of these assumptions have costs that will
compound over time.

What is the cost of constantly losing old media, for example? What is the
cost of perfectly good consumer devices being rendered unusable via
infrastructure updates? What is the cost of constantly rising complexity? This
is not discussed with any level of honesty.

------
mrunkel
Why does the old web server even need HTTPS? Have I missed a memo that HTTP is
going away?

If you're not sending anything private (i.e. just the contents of the website
are being delivered), there isn't really any need for HTTPS, so leave that old
HTTPD server running HTTP.

But, if you want your stuff encrypted, you can't rely on old versions of HTTPS
so you have to use something more modern.

Also, if you want your transmissions to be secure, you probably shouldn't be
relying on hardware/software that passed EoL ages ago, either.

If you want to read more, you can check out my page at
wais://runkel.org/personal?old-servers

~~~
msla
Tell that to ISPs who insert ads into every non-encrypted location they can
find.

~~~
vultour
Pretty sure that’s mostly a US problem, I haven’t heard of any other first
world country where this regularly happens. You should probably work on fixing
the problem instead of requiring everyone to accommodate you.

~~~
msla
You're wrong, naturally, and the fact HTTPS solves the general problem is
proof enough it's needed.

------
rodolphoarruda
> Even with automated renewals, Let's Encrypt has changed their protocol once
> already, deprecating old clients and thus old configurations, and will
> probably do that again someday.

Just another hurdle to our effort to make things last on the Internet [1]

[1]
[https://jeffhuang.com/designed_to_last/](https://jeffhuang.com/designed_to_last/)

------
poisonborz
But what is "lost"? All the user gets is a warning that this site is not
secure, which is true. As long as browsers don't block these sites (or at
least until a maintained browser does), this is just a necessary security
feature.

~~~
pmlnr
SSLv3 is deprecated and modern clients no longer allow its use. Old HTTPS
servers that predate TLS are already inaccessible.

~~~
Avamander
Your point being?

------
shrikant
Somewhat tangential: I maintain an internal webapp that runs PHP on IIS (I
know, I know...). Does anyone know how I can add a legit SSL certificate to a
setup like this? Everyone currently accesses it using a
[http://<friendlymachinename>/](http://<friendlymachinename>/) URL, and I've
somehow been unable to find information on how to HTTPS-enable this app!

~~~
ccmcarey
As it's internal, you'll need to look into setting up a PKI. You'll need to
create a certificate authority that has its public cert installed on every
host accessing the internal site, and then use that CA to sign the
certificate for the internal site.
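
A rough sketch of the moving parts with plain openssl (names and lifetimes
are placeholders, and a production PKI deserves more care):

    # 1. Create the internal CA; install ca.crt on every client machine.
    openssl req -x509 -newkey rsa:4096 -nodes -days 3650 \
      -subj '/CN=Example Corp Internal CA' -keyout ca.key -out ca.crt

    # 2. Create a key and signing request for the app server.
    openssl req -newkey rsa:2048 -nodes -subj '/CN=friendlymachinename' \
      -keyout server.key -out server.csr

    # 3. Sign the server cert with the CA, then install key + cert in IIS.
    #    (Modern browsers also want a subjectAltName extension, omitted here.)
    openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
      -CAcreateserial -days 825 -out server.crt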

~~~
shrikant
This sounds like it could work. Since it's a workplace server and is only
accessed internally by workplace-issued machines (which are always on the
workplace VPN), I think there might be some sort of public cert already in
place. Guess I just have to hunt down a certificate!

~~~
JoshTriplett
It's still preferable to use a real DNS name and a real certificate, if you
possibly can, which means that nobody has to install a CA.

There's currently no way on any modern system, as far as I'm aware, to install
a CA but limit it to only a specific domain and its subdomains; any CA you
install can issue a certificate for any domain. (There are theoretically some
certificate extensions that allow limiting a certificate to subdomains, but as
far as I can tell, implementations don't reliably support those extensions.)
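
For reference, the extension in question is X.509 Name Constraints; in
openssl's extension-file syntax it looks roughly like this when minting the
CA, though as noted, clients haven't reliably enforced it:

    # Extensions applied to the CA certificate itself: everything this CA
    # signs is (in theory) confined to one DNS subtree.
    basicConstraints = critical, CA:TRUE
    nameConstraints  = critical, permitted;DNS:.internal.example.com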

------
tilolebo
But if you really wanna run a webserver for the next 20 years without
maintaining it, isn't HTTPS the least of your problems?

What about system and software updates to fix critical vulnerabilities?
Hardware failures? I feel these things would take much more maintenance time
than the occasional certificate renewal.

------
preommr
Maybe we could use a doctype that allows for plain websites (no js, no forms,
no inputs) to be accessed without a warning from the browser when not using
https.

~~~
nicbou
That would still open you up to MITM attacks

~~~
marcosdumay
MITM attacks aren't much of a problem if you are just displaying some static
HTML from a random blog, the page can't show anything else, and the user is
reasonably confident that his browsing history won't be used against him by
any large internet party (any telecom, government, your hotel, etc., even the
ones not on the obvious path).

I guess that last one is a pretty big dealbreaker, but I can understand
somebody saying "it's just a site full of old jokes, I don't care about it".

~~~
discreditable
> displaying some static HTML from a random blog

Hard ask given how many don't work with JS disabled.

------
superkuh
There are two mutually exclusive views of the web: the web as a set of
protocols that lets individual humans share information about things they
love, and the web as a set of protocols for making a living. Since the corps
took control of the web by ignoring the W3C and forming the corp-controlled
WHATWG, all browser and HTML spec decisions have been motivated by corporate
needs, and these security requirements have been killing the free web by
making everyone do all the things money-transacting corporations need to do.

But, you don't have to use a mainstream browser and you don't have to stop
serving HTTP. It's a personal choice. The two webs can still exist alongside
each other if you choose not to follow the money.

~~~
whatshisface
You may be forgetting that corporations and world governments find it a lot
easier to track you on one than the other...

It is also silly to associate HTTP with the "personal web" when LetsEncrypt
exists.

~~~
superkuh
A long leash is still a leash. If a third party's authority is required to
host a visitable website it will eventually cause major problems. And by third
party I mean even currently-benevolent corporations like LetsEncrypt. Just
look at what happened with dot org.

------
OliverJones
Lost forever is the trust internet users shared twenty years ago. We didn't
worry about exploits and security updates because we didn't have to. Even if
somebody cracked a server, it was to learn and for the lulz.

Now that the global internet is overrun with cybercreeps we have to use TLS
and release and deploy security patches regularly.

If the early net protocols had been designed security-first, we wouldn't have
this situation. But, if they had been designed security-first, the internet
wouldn't have taken off. TCP/IP would still be competing with MAP/TOP and the
whole x.whatever stack.

I lament the loss of innocence.

~~~
dredmorbius
Was it trust, or was it misguided obliviousness?

------
ThreeFx
I think this post misses the point of HTTPS completely. It makes HTTPS sound
like some new technology, even though the HTTP protocol is still used, the
difference being that the connection is encrypted and integrity protected.

And said integrity protection is what is important in today's world:
advertising is being injected into HTTP content by third parties, which is a
very bad thing IMO.

If updating your servers once in a while is too much for you, maybe running a
webserver isn't for you at all - there are certainly enough hosting providers
that will handle that for you.

~~~
hedora
The point of the article is that the web ecosystem is breaking compatibility
with “old” servers every six months or so these days.

First certbot broke. Now, older tls. Next, they’ll probably deprecate tls 1.2.

I imagine http 1 or 2 will be on the chopping block in the next few years.

They can’t kill both http 1 and http 2 because http 3 needs to use at least
one of them as a signaling protocol to establish connections. I’m guessing
that means there will be an http 4 that breaks with http 3, but doesn’t need a
separate session establishment protocol. (Http 3 partially breaks TCP, and
replaces it with userspace stuff. Http 4 would complete that process.)

The web was a success precisely because it was simple and stable enough to let
everyone escape from the old hosting providers. They heavily censored content,
charged high fees and so on.

Now, hosting is unnecessarily complicated, and the providers are consolidating
back into a handful of players like Facebook, YouTube, and Cloudflare.
Censorship and mass surveillance are on the rise. Since more people use
computers than before, these players are helping kill the free press.

Telling people that operating a static web page should be a full time job,
because it’s (been made) really hard is just accelerating these trends.

~~~
ThreeFx
I completely agree on the breaking issues; it is a _lot_ of work to maintain a
website these days. In 99% of cases I'd also agree with you that there should
be _no_ reason to ever break, e.g., a library in use.

But the thing about certbot and TLS is that these things are
security-relevant, and IMO security is one of the only (if not the only)
reasons to break compatibility these days. TLS ciphers break, and heck, even
the TLS protocol itself may reveal flaws later (see POODLE, BEAST, FREAK,
etc.). That's why SSLv2, SSLv3 and TLS 1.0 are deprecated: not because there
is a better protocol out there, but because the protocols are inherently
insecure. (There is no huge flaw in TLS 1.1 that I am aware of; it uses MD5
and SHA1 under the hood for master secret derivation, but that's about the
only thing.)

I'm with you that maintenance is a big chunk of work, but IMO that's the price
you pay for being in control. I've got a few services running myself, and
honestly I forget about the boxes after setup and enabling auto-upgrades, so
the breakage isn't too bad, at least for me.

------
Deukhoofd
While I personally agree with making the entire web run on https, I can
definitely see the argument that it requires more monitoring and maintenance,
especially when your certificate provider changes their protocol.

I wonder if there can be a future drive for a system where this is not
required anymore, and where we can create a decently standardized form of
encryption of the same or better quality as TLS, without being dependent on
external services.

------
ken
Someday, you may not be able to connect to IPv4 systems. I've already lived
through physical layers becoming obsolete, like 10BASE2. Time marches ever
onward.

If you want to preserve an old webpage, you should stick it behind a reverse
proxy or something so the content can still be accessible as the other layers
get upgraded. You probably don’t want to run a 1990’s OS on the open internet
today, anyway.

------
ComSubVie
My dad is using an old Android tablet to surf the web. Unfortunately, there
hasn't been an update available for it in some years.

This year it stopped displaying some websites because of a newer HTTPS
algorithm that the tablet no longer supports. And all that for watching the
weather forecast and similar things that really don't need to be forced to
HTTPS.

~~~
Wowfunhappy
Can that Android tablet run the mobile version of Firefox by any chance? At
least on desktop, Firefox doesn't rely on the OS for https, which is helpful
on older systems. I imagine mobile would be similar.

~~~
ComSubVie
I tried Opera because I couldn't find Firefox in the store. Opera worked about
a month longer than the default browser, but doesn't work anymore.

------
pornel
On the software side, we're getting HTTPS churn now, because we've only
relatively recently started taking it seriously, after over a decade of
ignoring it. TLS 1.0 is from 1999!

Hopefully, we'll eventually figure out how to make a TLS stack without endless
buffer overflows and padding oracles, so it won't be necessary to upgrade
servers all the time.

------
saagarjha
My website has been served over HTTPS for the last couple of years. Of course,
it's nice to get the padlock and all the other benefits of HTTPS, but I used
to delight in demonstrating it working on older systems going back all the way
to the original WorldWideWeb.app, and was a bit disappointed that I could no
longer do so easily…

------
Ericson2314
The type of static sites that live on in posterity should just be on IPFS or
similar. HTTPS makes better sense for more dynamic/interactive things where
the server will need regular maintenance anyway.

Recognize those are two different use-cases, and the problem goes away.

------
Florin_Andrei
> _In the era of HTTP, you could have set up a web server in 2000 and it could
> still be running today, working perfectly well (even if it didn't support
> the very latest shiny thing)._

Not true in the least. It could not be further removed from reality. This is
silly.

People forget how terrible security was back then (not that it's orders of
magnitude better now). You had to stay on top of security patches all the
time. When a software update was not doable, perhaps there was a config
workaround.

No, the admin work was just as frenetic back then as it is now. Perhaps the
stakes were even higher (root shell anyone?).

------
crazypython
I wish HTTPS origins were allowed to define a set of certificates to trust.
Imagine a game architecture where each game server has its own self-signed
certificate. For example, imagine you download a game client via HTTPS, and
the game client downloads a list of official servers from the same origin. I
want the HTTPS origin to be able to say "for this webpage load, trust these
certificates, because they originated from an HTTPS origin."

Instead of making HTTPS hierarchical through a certificate chain, this lets
it be dynamic (script-controlled) and sideways.

Thoughts?

------
cft
Apple just unilaterally reduced the maximum validity of HTTPS certificates in
Safari from 27 months to 15 months. Not too long ago it was over 5 years. Bit
by bit, all this increases the burden on smaller, independent, non-commercial
webmasters, who in the end just give up, and it leads to further
centralization of the web: it forces them onto platforms, which take care of
your "safety" (not just from MITM, but usually also from "violating" content).

~~~
Avamander
> who in the end just give up and leads to further centralization of the web:
> forces them to platforms, who take care of your "safety"

What's stopping those decentralized platforms from utilizing modern
certificate renewal?

You're also totally ignoring that the webmaster still has to do security
updates, which are probably a much bigger maintenance burden.

~~~
cft
I ran an Apache site with 10000+ daily visitors for 10 years on my own SuSE
box from home. Firewalled the box and only left ports 80, 443, 25 and 22 open.
Only had to update recently when I replaced the box with Raspberry Pi (I
wonder how long the SD card will last).

------
nonamenoslogan
After more than 20 years of running my own webservers on everything from old
Sparcstations to Debian VPSs, I took a few-year hiatus from having a big web
presence and pointed my domain at a simple github.io page. It worked so well
that I decided to go full GitHub for my simple blog/pictures/projects web
page. I've gone almost full circle, back to the original static HTML style.

------
kirstenbirgit
In general, anything you connect to the Internet (or perhaps even a network
used by Internet-connected users) needs to be updated regularly.

That an HTTP server from 2003 continues to work today is a happy coincidence,
but it also means that there's probably a working 0day for that server
software, so if you can't update it, you have to mitigate it another way.

------
nunez
It sucks that we are going to effectively bury the "old Web" (well, Google
will, by ranking HTTPS websites above HTTP-only websites), but it is also
really nice that people can't see what I'm doing over the wire, and that I
get big fat red warnings when the certificate I'm served isn't what I think
it is.

------
welly
Isn't the more interesting part of old web servers what they actually contain
in the form of content, rather than the technology?

------
netheril96
So many people advocating against HTTPS on this page.

A lot of Chinese developers made the same arguments. They have been silent
since 2015, when Chinese ISPs and even some routers started injecting ads
into plain HTTP sites. Many websites have quickly adopted HTTPS for all of
their pages ever since.

------
waltpad
...and old web browsers. Protocols evolve, security requires old encryption
schemes to go away, and thus the OS you use has to keep up as well; if you
try to use a reasonably old live bootable DVD, you might be in for a surprise.

------
carapace
A) Just use Caddy server.

B) Host your content on IPFS.

You shouldn't need to run your own server in 2020. (Okay maybe a pinning
server or service, but that's a small constant plus storage costs. Cost of
enduring data should approach zero.)
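
For what it's worth, the Caddy route really is close to zero-config for a
static site; Caddy obtains and renews the certificate on its own (domain and
path are placeholders):

    # Caddyfile -- automatic HTTPS with no explicit certificate handling.
    example.com {
        root * /var/www/html
        file_server
    }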

------
gpvos
You could run djb's publicfile, which serves only static files.

------
aabbcc1241
What if we used a content hash, or a hash of the provider's public key, as
the URL? Like IPFS / ZeroNet.

The DNS server itself is also a service you have to rely on for HTTPS to work
in practice.
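
That is roughly how IPFS works today; the address is derived from the content
itself, so it can't be tampered with without changing the hash:

    # Publish a file to IPFS; the printed CID (content identifier) is the hash
    # the file is addressed by, e.g. via a gateway at https://ipfs.io/ipfs/<CID>.
    ipfs add index.html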

------
est
The solution: use PASV-enabled FTP servers. It works in most browsers (for
now).

~~~
swiley
I'm pretty sure FTP only works on browsers with patches/flags to enable it
that are added by distro maintainers. I remember most major browsers dropping
it from official builds.

------
tonetheman
I love this article.

All this switch to HTTPS just so Google can guarantee its ads are being seen.
You get some other benefits as a side effect... but that is the real reason
HTTPS is everywhere.

Ah well.

~~~
dmarlow
Can you elaborate on this?

------
White_Wolf
30 minutes (for a noob like me) to fix: with pfSense + HAProxy + certbot
enabled, you can serve old websites all day long.

~~~
unilynx
But are those available for "an old SGI Irix workstation or even a DEC Ultrix
machine" ?

~~~
pjc50
I believe the suggestion is that you proxy them via another system.

~~~
shiftpgdn
Which, if you listen to the advice of HN, will be an EC2 micro instance.
Which means you're handing more of the internet over to "the big guys",
exactly as the author of the article points out.

~~~
MegaThorx
I would say that, depending on the traffic, a Raspberry Pi should suffice?

~~~
White_Wolf
Good point. That would be more than enough for low traffic.

------
WilliamEdward
I like that HTTPS forces you to actually care about improving and maintaining
your website. Somehow, leaving your site alone forever to gather dust is
considered a good thing. The author totally missed that one.

Yes, some sites serve basic content that can stay up forever without change
(thistothat.com comes to mind as a good example) but it really doesn't hurt to
force people to put a little bit more care into their sites.

~~~
nicbou
Books get written once. Why can't websites? We still learn from documents that
are centuries old, and when a cache of them is found, it makes the news.

Yet on the other side, we let so much information expire because the
underlying technology is obsolete.

~~~
ozim
Books are not going to turn into botnet nodes. Book "hardware" does not get
less efficient when new paper is produced. Electronics will not live for
centuries unless you spend loads of money maintaining them.

