
Safari will no longer trust certs valid for more than 13 months - nimbius
https://www.theregister.co.uk/2020/02/20/apple_shorter_cert_lifetime/
======
superkuh
There are two mutually exclusive views of the web: as a set of protocols that
lets individual humans share information about things they love, and as a set
of protocols for making a living.

There are real reasons for the for-profit web to want limited cert lifetimes,
since revocation doesn't really work in practice. In terms of browser
development the two views are mutually exclusive, and the one that funds the
coders gets its way, especially with the W3C marginalized and a mix of
corporations and corporate-centric standards groups running things now. Expect
that in the near future you won't be able to host a visitable or indexable
website without relying on at least one third-party service.

~~~
bad_user
With Let's Encrypt it's cheaper than ever to host a personal website over
HTTPS with a certificate that updates itself.

Thanks to Let's Encrypt, free hosting services like Netlify and GitHub Pages
now provide HTTPS certificates, and installing one on your own server is
pretty painless, if you're into managing your own server. And if your hosting
provider doesn't support Let's Encrypt, you can always put Cloudflare in front
of it.

So I don't really understand what you're talking about, when the price and
ongoing maintenance burden of HTTPS certificates have gone down significantly
for hobbyists.

~~~
xg15
> _Let's Encrypt [...] Netlify [...] GitHub Pages [...] your hosting provider
> [...] Cloudflare_

Those are exactly the kind of third-party services the GP was talking about.

> _if you're into managing your own server_

One of the core advantages of having a personal server has always been that
you can keep it entirely off the internet and run it without _any_ involvement
of third-party services. That is no longer possible. I now need to purchase a
domain name and set up infrastructure to fulfill Let's Encrypt's challenges,
just so I can serve a page on my LAN.

~~~
extra88
You can use HTTP instead of HTTPS or you can use a self-signed cert.
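For the LAN-only case, minting a self-signed cert is a one-liner with openssl. A sketch, with the hostname, IP, and 825-day figure all illustrative (`-addext` needs OpenSSL 1.1.1+):

```shell
# Generate a private key and a self-signed certificate in one step.
# 825 days is an example figure chosen to stay well under the multi-year
# validity limits modern clients enforce.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout lan.key -out lan.crt -days 825 \
  -subj "/CN=myserver.lan" \
  -addext "subjectAltName=DNS:myserver.lan,IP:192.168.1.10"
```

The subjectAltName extension matters: modern browsers ignore the CN and match only against SANs.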

~~~
xg15
Often, neither works, even for small projects:

\- HTTP pages have no access to a significant part of the web APIs added in
recent years - and they will be blocked from _all_ APIs added in the future.

\- Self-signed certs show security warnings that are deliberately confusing
and discouraging to click through, and will likely become even more so in the
future. Showing those to other people is not an option.

~~~
Lyrex
In my opinion this doesn't fall under the "it's only on my LAN and a super
small project" category. If your LAN is a company, then you should be able to
deploy a custom CA to your clients and sign your certs. If it's only a small
side project you personally work on, then just trusting the cert locally
works too. If people don't want to use third-party providers, they have to
do some of the work on their own. That's nothing new (at least to me).

~~~
xg15
A small, personal project does not mean the developer is the only person who
uses it. Servers can also be used by friends, family, roommates, etc., for
whom installing and managing custom CAs is a hassle.

I do agree that using self-signed certs and clicking through security warnings
is possible - however, it is being made deliberately tedious (e.g. Chrome will
forget that you accepted the cert after a while). It also seems to me that
this escape hatch is actively discouraged by browser vendors, so I'm honestly
not sure how long it will stay open.

Self-signed certificates also make API requests unpredictable, because no
accept UI is shown for such requests.

> _That's nothing new (at least to me)._

It absolutely is. With HTTP, you could simply run a local web server and have
everyone interested point their browser at it - and everything worked. This is
not possible anymore, unless you want to make recurring payments for a domain
and accept that you need an internet connection.

~~~
dublin
And this is exactly why SSL everywhere is a really, really, bad idea. (Plus
the problems of IoT server certs, mentioned above...)

~~~
xg15
I understand the rationale behind https-everywhere and I believe it's
absolutely necessary for the web at large. The problem of network attackers is
certainly real.

However, a side-effect (intentional or not) is that the web is turned into a
sort of app store: Either you belong to the platform or you don't, and whether
or not you do is decided by third parties. (Who, btw, are not even bound by
any kind of public mandate - they are simply private, profit-driven companies)

I also don't think the stated security advantages always make sense: Let's
Encrypt will serve network attackers just as readily as legitimate customers.
Meanwhile it leads to a lot more stuff being exposed on the internet than
would otherwise be necessary. We also force devices that should simply expose
a local web interface to depend on a cloud service. I don't see how this makes
anything more secure.

I guess what I'd want is simply a way to designate a device as "trusted"
locally, without depending on third-party services, internet connectivity or
anything else, and without anything expiring - a way that non-technical users
could be encouraged to use as well.

------
j1elo
Does anybody know why Safari on iOS rejects self-signed certificates for
WebSocket connections? (This doesn't happen on desktop.)

This is a major annoyance, since it is the _only_ browser that does this: it
won't work with "wss://" URIs even after accepting the mandatory certificate
exception. Accessing the page rightfully shows a warning on all browsers,
which the user can dismiss with "Continue" or similar to start the demo -
except on iOS. (This is for simple one-off tutorials or demos [1], so
maintaining good certs _or_ asking users to install a custom root CA on their
devices is out of the question.)

I know, this seems like a StackOverflow question... but the only 100% relevant
one I found [2] didn't get much love, so I thought maybe someone on HN knows
more about this.

[1]: [https://github.com/Kurento/kurento-demos-
js/tree/master/play...](https://github.com/Kurento/kurento-demos-
js/tree/master/player2many)

[2]: [https://stackoverflow.com/questions/36741972/using-a-self-
si...](https://stackoverflow.com/questions/36741972/using-a-self-signed-
certificate-with-safari-and-websockets-osx-ios)

~~~
castillar76
Apple recently (last fall) introduced a change that rejects all certificates
that are valid for more than two years, whether they're public or private. I
would double-check whether the self-signed certificates are valid for longer
(many default self-sign processes make ten-year certs, for instance). Failing
that, you may have to create a proper private PKI and get users to install the
CA, or else use something like Let's Encrypt for certs.
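Checking the validity window takes one openssl command. A quick demonstration, creating a throwaway cert with the kind of ten-year default mentioned above (the `demo` name is a placeholder):

```shell
# Create a throwaway cert with a common ten-year default validity, then
# inspect its window; a notAfter more than two years out is the red flag.
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo.key \
  -out demo.crt -days 3650 -subj "/CN=demo" 2>/dev/null
openssl x509 -in demo.crt -noout -dates
```

The second command prints the notBefore/notAfter pair, which is all you need to diagnose whether the two-year limit is the culprit.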

~~~
j1elo
Thanks! You gave me something to check. The certs are indeed made with 10-year
validity, the idea being that they should just keep the demo working for a
long time, even if it is with a certificate warning.

I don't think it's worth the effort of setting up the whole automatic renewal
process with Let's Encrypt for what amounts to a quick static code example...
so in principle I wanted to avoid that.

This article means that soon this 2-year maximum will become a 13-month
maximum, right?

Is there any way to create these special-purpose certs with some kind of
"private use" or "development use" flag that avoids having to re-create them
so soon? 13 months looks like too short a period; I'm afraid nobody will
remember to refresh the fake certs and the demos will break... for no good
reason related to the code itself.

~~~
castillar76
No, Apple was clear that this change (the 13-month limit) only affects public
certificates, so internal private PKIs and self-signed certificates retain the
two-year limit from before. I _think_ that two-year limit extends to every
certificate used for TLS on the system, so there's no way around it by
manually trusting the certificate or CA chain (there's no 'development use'
flag on certs), but that bears testing.

------
wbond
Apple is the reason I just switched a handful of certificates from Let’s
Encrypt to basic 2 year DV.

To support Apple Pay on the web, you have to go to your developer account on
the Apple website, and under certificates generate a custom ASN.1
authentication file and upload it to the `/.well-known/` folder on your
domain. Once uploaded, you have to click a “Verify” link, to check the file
and mark the domain as Apple Pay approved. The issue is the verification only
lasts as long as the certificate expiration, and if a new certificate is
installed, you have to re-verify the domain with a newly generated
authentication file, _for each domain_. A fresh certificate with an old
authentication file does not work.

This means if using Let’s Encrypt, you have to manually step through the
verification process for each of your production, staging and development
environments every three months. There doesn’t appear to be any automated way
to handle this.

After two cycles of this I opted to purchase 2-year certificates just to save
the hassle of re-authenticating my web environments on a rolling basis by
hand.

This announcement just means more frequent manual processes once again. What a
pain.

~~~
cyphar
There is some argument for this kind of certificate pinning (though I'm
honestly not sold on the idea), but I think that this example is a further
argument for scriptable certificate renewal. Most ACME clients allow you to
run scripts after the certificate is renewed, so you could (in principle)
trigger a script that does this Apple-specific verification process for you
(maybe you could even trigger the "verify" button click by messing around with
cURL -- though it'd be pretty annoying for there to be no API for this
process).

By manually getting 2-year certificates you're setting yourself up to forget
part of the renewal process. This was the main argument behind Let's Encrypt
having such short expiration windows -- it encourages people to script their
entire deployments.

~~~
wbond
I think this is an argument that while a certain group of server
administrators and security professionals love ACME, it still has a number of
kinks to work out.

In my case I’m not really setting myself up to forget about the renewal
process as I have a script to generate the key and CSR and an ansible playbook
to update the cert once sent to me. Certificate vendors are more than happy to
email you to remind you of an upcoming renewal, in just the same way Let’s
Encrypt does.

On a side note, trying to script out the proper setup/migration of Let’s
Encrypt is WAY more involved and fraught with mistakes than a simple
certificate upload. The failure case is that the initial certificate issuance
succeeds but since the initial setup needs to happen before SSL is configured
and working with Nginx, you can’t use the same config before and after the
initial setup. Thus you need to have two separate Nginx configs and switch
them, or you have to use standalone for the initial issuance and webroot for
renewals. Both of these are far easier to mess up than uploading a
certificate.

I’ve set up more than 10 different servers with Let’s Encrypt and I don’t
think a single one has just worked. I think in every case something got messed
up along the way, and you only find out about it 70 days later with the
renewal email, IF you are the admin email on the LE account.

Don’t get me wrong, I think there are great things about Let’s Encrypt, but it
has plenty of thorns to deal with. I’m glad we haven’t all been forced into
three month renewals by the CAB forum (since the certificate vendors have a
say and they got feedback from customers before agreeing). I am fairly annoyed
that Apple decided to unilaterally change the rules when they were already
part of an organization that deals with this topic. I can only imagine browser
vendors moving forward will have little to no concern for site/server
administrators and how their changes are affecting things.

As it stands now, we are at a 400-day certificate lifetime, which means a bad
actor can only impersonate for a year, in the name of revocation being
performance-prohibitive. From a user's perspective this is effectively the
same as three years. The only meaningful change would be a lifetime of
something like a week or a day, but I shudder to think of all the ways that
would fail spectacularly.

------
Mister_Snuggles
I have my own mini-CA for internal stuff, built using the xca[0] tool with
certificates and private keys distributed manually. I usually make the keys
valid for two years so that I don't have to renew and redistribute very often.
Most of this started as a way to learn how this stuff works, but it's now
turned into a "production" thing as I've started using this to issue user
certificates for VPN authentication.

Is there any tool that I can use to help automate this in a reasonable manner?

Ideally, I'd love to see a web version of xca that supports ACME with some
controls on how ACME certificates get issued. Bonus points for supporting OCSP
as distribution of CRLs is another upcoming pain point.

[0] [https://hohnstaedt.de/xca/](https://hohnstaedt.de/xca/)

~~~
tialaramex
If you're moving Private Keys you are Doing It Wrong. This is very common in
VPN setups (and S/MIME) but still a terrible idea and worth taking the time to
figure out how you'll make sure you don't do this.

~~~
Mister_Snuggles
I'm definitely open to learning how to do things better, and I have no doubt
that some of what I'm doing is wrong - after all, this whole thing started as
a homelab-turned-homeprod learning experience.

I think the proper way to do certs is to have the server (Web, VPN, whatever)
create a certificate signing request and private key on the server, send the
signing request to the CA to sign it, and then install the resulting signed
cert on the server. Is this correct?

What I'm finding is that there are cases where this just won't work. For
example, my QNAP NAS allows me to either create a self-signed certificate (I
don't want this, I want it signed by my CA), get one from Let's Encrypt (same
issue), or upload certificate, private key, and optional intermediate CA
certificate files (and we're back to moving private keys). This is a
limitation of QNAP's GUI for sure, but it's not unique to QNAP.

Similarly, I'm not sure how I'd generate the certificate plus private key on
an iOS device and submit it for signing (the VPN scenario). This one
particularly bothers me, because the .mobileconfig file ends up being the keys
to the castle. Ideally I'd like the client to be authenticated with both a
user-specific certificate and EAP, but I don't think iOS supports this. I
haven't gone very far down this rabbit hole, so it's possible I'm missing
something.

When I finally secure my internal web server (which acts as a reverse proxy
for all of my internal services), I'll try the CSR approach for the learning
experience. This approach should also work fine on my VPN server.

~~~
tialaramex
> I think the proper way to do certs is to have the server (Web, VPN,
> whatever) create a certificate signing request and private key on the
> server, send the signing request to the CA to sign it, and then install the
> resulting signed cert on the server. Is this correct?

Yes. The CSR is public information, so it's fine to send it somewhere. "Sign
it" [the CSR] is a phrase that doesn't entirely reflect the relationship
between a CSR and a certificate - experts know it's technically wrong but say
it anyway, so don't sweat it, though it's somewhat misleading.

I will try to circle back to see if I have any suggestions for your specific
scenarios later.

~~~
Mister_Snuggles
Thanks!

I'm going to try the CSR route the next time I have to do this and see how
that works out. xca seems to handle CSRs easily, so doing this less-wrong
should work out fine.

I'm not too concerned about the QNAP scenario - it's consumer gear, so I
expect it to lean more towards doing things easily over doing things
correctly. The iOS scenario is much more interesting to me since this is
something that's more applicable in the real world.

------
mooibos
I develop for Apple platforms, and it's absolutely mind-boggling how
frequently and regularly they break your code for no tangible benefit.

Hopefully their influence doesn't spread to the web too.

~~~
threeseed
There is a tangible benefit. For users.

64-bit only, Project Catalyst, Bitcode, HTTPS-only connections etc. are
examples of initiatives which have definitely caused pain for developers but
have immensely benefited users as a whole. And if you don't passionately care
about users, then frankly, find another platform to develop on.

~~~
gsich
> 64-bit only

How has that benefited the user? Directly and measurably - no "it's 0.3%
faster now" excuses.

~~~
diebeforei485
Not having to load both 32-bit and 64-bit libraries into memory is an
efficiency (and battery life) improvement on the system as a whole.

~~~
gsich
That would fall into the "it's 0.3% faster now" category.

------
Santosh83
No joint announcement with other industry 'leaders' like Google and Microsoft?
What is their stance on this? Will they make similar changes to Chrome/Edge?
And Mozilla? Maybe I'm wrong, but compromised certs are game over very
quickly, aren't they? Reducing their lifetime to 'just' a year still leaves
plenty of time to do damage, so what exactly does this high-handed change
bring to the table? The other stated reason - that it will keep the cert
management folks busy and alert - also sounds like grasping at straws to prop
up the decision. I wonder what the full reasons are...

~~~
reaperhulk
The goal is to promote automation and continue lowering certificate lifetimes
as operations get better. This ultimately will allow for lifetimes short
enough to be useful.

As for the other browsers, Google originally proposed SC22
([https://cabforum.org/pipermail/servercert-
wg/2019-August/000...](https://cabforum.org/pipermail/servercert-
wg/2019-August/000894.html)) last year and all the browsers voted for it. CAs
voted it down at the time but there were rumblings via various back channels
that several major CAs actually wanted the ballot to pass but for political
reasons could not publicly support it.

So while Apple is acting “unilaterally” here, there is universal support among
browser makers and tepid support from CAs. You should expect Google and
Mozilla to follow suit in the next 6-12 months.

~~~
mamon
How will that automation verify that a certificate is issued to the legitimate
owner of the website and not to a hacker? Are the challenges used by Let's
Encrypt secure? To me, automating certificate issuance will lead to less and
less verification, to the point where having a valid certificate becomes
meaningless.

EDIT: to clarify - there are two bad things about Let's Encrypt:

1\. It's automated

2\. It's free

The fact that it's automated means less human intervention along the way,
which on one hand lowers costs, and on the other makes detecting scams harder
(unless they deploy some really good machine-learning fraud detection).

The fact that it's free means there's no credit card number or other
information that would help identify the actual person who requested the
certificate.

Together, those things make the system less secure, not more.

EDIT 2: Both types of Let's Encrypt challenge look like they push the
responsibility down to either the web server owner or the DNS service. Maybe
that's a good thing, since at least there's one fewer party that can screw
things up.

~~~
brirec
I think you may not be very familiar with Let’s Encrypt’s challenges. Allow me
to briefly explain the gist of them:

The two most common challenges are an HTTP challenge and a DNS challenge. The
HTTP challenge gives you a response to host as a file on the domain during the
validation period. This response is, for all practical purposes, random and
cannot be guessed. Then, after your script tells Let's Encrypt that the
response to its challenge is available and up to date, Let's Encrypt performs
an HTTP GET request to retrieve that response and checks that it is exactly
what the script provided. Only then does it proceed with signing your CSR and
giving you a valid certificate.

This requires (at least temporarily) a web server running on port 80 at the
domain in question, and in order to break it you would need to be able to
effectively either hijack the A record for the domain as read by Let’s
Encrypt, or to break into the web server to properly issue a certificate that
one then steals. Impossible? Probably not. Impractical? Very.
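The mechanics are simple enough to sketch in a few lines; the token and account-key thumbprint below are made-up placeholders (in ACME, the published body is the token joined to the thumbprint with a dot):

```shell
# The ACME server hands the client a random token; the client publishes
# <token>.<account-key-thumbprint> at a well-known path on the domain.
# Both values below are made-up placeholders.
TOKEN="evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ-PCt92wr-oA"
THUMBPRINT="9jg46WB3rR_AHD-EBXdN7cBkH1WOu0tA3M9fm21mqTI"
mkdir -p webroot/.well-known/acme-challenge
printf '%s.%s' "$TOKEN" "$THUMBPRINT" \
  > "webroot/.well-known/acme-challenge/$TOKEN"
# Let's Encrypt then fetches
#   http://<domain>/.well-known/acme-challenge/<token>
# over port 80 and only signs the CSR if the body matches exactly.
```

Only the ACME account holder can produce the correct thumbprint, which is what binds the challenge response to the account requesting the cert.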

DNS challenge is even more secure, in my opinion, as it works the same but the
response code is stored in a TXT record for Let’s Encrypt to validate. In
order to break this you would need control of the DNS servers.

So, to put it rather simply:

> _Are the challenges used by Let's Encrypt secure?_

Yes, so long as you trust your DNS and web servers not to be compromised. And
if they are, it's frankly game over anyway.

Now let’s contrast this with, for instance, getting a multi-year certificate
from the likes of Verisign or similar: this (as far as I am aware) requires
manual interaction, which can at least theoretically allow for human error, of
which there are many chances.

Additionally, many more traditional CAs will let an inexperienced user have
the CA generate the private key and then transmit it to the user. This opens
up a LOT of dangerous possibilities, as now this private key is being saved
and moved around, and could easily be missed and left on the workstation used
to perform the work. Or a MitM attack could even snatch it in transit.

Honestly, I don’t think there is much (if any) point in still using manual
verification. The human aspect of it also opens up chances for forgery, and so
on.

Let’s Encrypt’s challenges are specifically designed to be difficult or
impossible to hijack, and so far as I understand it the private key should
never leave the server it will remain on.

So again, to answer your question succinctly and to the best of my knowledge:
yes, the challenges used by Let’s Encrypt are most certainly secure.

~~~
xg15
> _DNS challenge is even more secure, in my opinion, as it works the same but
> the response code is stored in a TXT record for Let's Encrypt to validate.
> In order to break this you would need control of the DNS servers._

> _Now let's contrast this with, for instance, getting a multi-year
> certificate from the likes of Verisign or similar: this (as far as I am
> aware) requires manual interaction, which can at least theoretically allow
> for human error, of which there are many chances._

What I've never understood is how this doesn't ultimately just shift the
security risk to my DNS registrar.

Instead of social-engineering the CA to give me a cert, I have to social-
engineer the registrar to store a TXT record. I don't see why one should be
significantly harder than the other.

> _Additionally, many more traditional CAs will let an inexperienced user
> have the CA generate the private key and then transmit it to the user. This
> opens up a LOT of dangerous possibilities, as now this private key is being
> saved and moved around, and could easily be missed and left on the
> workstation used to perform the work. Or a MitM attack could even snatch it
> in transit._

Again, it's harder for a rogue CA to abuse my certificate - but instead, a
rogue registrar could now easily manipulate my DNS record and receive a valid
cert of its own.

~~~
castillar76
> What I've never understood is how this doesn't ultimately just shift the
> security risk to my DNS registrar.

> Instead of social-engineering the CA to give me a cert, I have to social-
> engineer the registrar to store a TXT record. I don't see why one should be
> significantly harder than the other.

Rogue registrars aren't necessary: one can simply attack the registrar or
interfere with the DNS traffic. These attacks have already been seen in the
wild and are continuing today: check out info on DNSpionage
([https://blog.talosintelligence.com/2019/04/dnspionage-
brings...](https://blog.talosintelligence.com/2019/04/dnspionage-brings-out-
karkoff.html)) and Sea Turtle
([https://blog.talosintelligence.com/2019/04/seaturtle.html](https://blog.talosintelligence.com/2019/04/seaturtle.html))
attacks.

Meanwhile, Let's Encrypt has been making some interesting changes. For
instance, they just introduced multi-perspective challenges
([https://letsencrypt.org/2020/02/19/multi-perspective-
validat...](https://letsencrypt.org/2020/02/19/multi-perspective-
validation.html)) in which they submit multiple challenges to the user from
different network paths. Attackers hijacking network paths to interfere with
challenges must then intercept all possible paths to a client, which is much
harder.

That said, I'm not a fan of devolving our certificate validation to DNS--it's
like building a castle on Jello. It wasn't designed to be a security-first
protocol, and it's definitely showing its age.

------
lisper
IMHO, expiration dates on certificates are and have always been the Wrong
Thing. The Right Thing is to have the certificate contain a time stamp of when
it was issued. The client should decide whether the certificate is still
trustworthy. The cert can contain a recommended expiration date, but the
dispositive information should be the issue date.

~~~
abdullahkhalids
Can someone chime in with the original intended purpose of the expiration
date?

The one that I can imagine (without research) is that the issuer knows the
quality of their own security practices, and if they says that a certificate
will expire by X date, they are saying that they can't guarantee that they
will still be the only person with the secret key after that point.

Any other explanation?

~~~
wool_gather
> the issuer [...] are saying that they can't guarantee that they will still
> be the only person with the secret key after that point.

It's not quite clear what you mean here.

The issuer should _never_ have the private key for the end-entity cert (the
one representing the domain or whatever). Only the owner should have the
private key.

EDIT: Hmm, according to another comment here, some CAs will offer to generate
both halves of the keypair for the end user. Yikes.

If you are talking about the CA's own private key...if that _ever_ leaks, then
_no_ certificates signed by it should ever be trusted again; expiration date
has no relevance here.

~~~
tialaramex
> EDIT: Hmm, according to another comment here, some CAs will offer to
> generate both halves of the keypair for the end user. Yikes.

I wasn't able to find this comment, but perhaps I didn't look hard enough?
EDIT: Wait, I think I found it. I will reply there.

In the Web PKI this is prohibited _but_ there are or at least were resellers
who'd offer this service to their customers. You should not use this service
of course, and some CAs have pledged to tell their resellers not to offer it.

In S/MIME it's more common because often the end user is both technically
unsophisticated and not given real control over their client in order to mint
their own keys anyway. But S/MIME is... probably not important, certainly it
isn't what Apple's change is about.

------
nicolas_t
While I agree that from a security perspective it is better not to issue
long-lived certificates, and that automation would be best, I dislike the fact
that the subdomains of any certificate I issue become public due to Google's
Certificate Transparency project.

By making every subdomain public, it makes the job easier for any attacker
wanting to find smaller servers to target. It's not that I believe in security
through obscurity, but besides making sure all servers are as secure as they
can be, I do believe in not making the job easier for adversaries.

So instead I use wildcard certificates, and with those, automation gets much
more annoying: you need DNS validation (Route 53 or similar does provide an
API for this), and then I'm not sure I'm comfortable having each server
generate its own wildcard certificate, leading to hundreds of wildcard
certificates...

This is why I currently use an old-style one-year wildcard certificate that
gets updated through Chef, but I'm really not sure whether this is the best
solution.

~~~
iso1210
Without certificate transparency how do you know nobody has issued a
certificate for your server? Surely that's a far higher risk than knowing a
domain?

~~~
nicolas_t
Oh, I completely agree that Certificate Transparency is beneficial for that,
and we have an alert set up. But that still leaves me not wanting to leak
every single subdomain we use internally - hence the wildcard certificates.

~~~
tialaramex
CT didn't change this for bad guys. If you're a bad guy (or a neutral
researcher with a budget for the data), you can buy what's called "passive
DNS". Several suppliers will sell you a list of DNS requests and their
answers; the identifying information about who made the requests is elided so
it's not PII, but it has the same effect of making the fact that
servername.example.com exists public information.

Even if you are unusual in actually naming machines cy23hdc9.example.com
rather than exchange2016.example.com, the existence of this service means you
need to stop assuming nobody knows these names. Anybody who cares knows them.

~~~
xg15
Yes, which is why you used to be able to have fully internal domains that are
served by an internal DNS server and are never seen on the public internet.

~~~
castillar76
You can still have this: it's called 'split-horizon' or 'split-brain' DNS. It
can be tough to set up and maintain for large DNS environments, but not /that/
difficult even at that level. If you do this and you don't want those names in
the public CT records, though, you have to implement a private internal PKI
for those servers so they're not getting public certs. Or use a wildcard,
which carries its own significant security risks.

~~~
xg15
> _you have to implement a private internal PKI for those servers so they're
> not getting public certs_

This would also involve setting up a custom CA and distributing your CA cert
to all machines that should be able to access the page. Good luck with that!

All of this is a ridiculous amount of effort to set up and maintain. It's a
lot easier to just make the internal domains accessible to the internet and be
done with it.

The effect being that networks are being nudged into exposing a lot more
surface to the internet - and this somehow in the name of more security.

~~~
castillar76
It's not terribly painful to do if you have central management (e.g. you can
push them out through a GPO or MDM system). For small businesses, however, I
totally agree it's a pain in the neck: the school my wife teaches at would
have a devil of a time doing it with their current IT staff workload, and
they'd still have issues with unmanaged devices. And all that's to say nothing
of the ticking time-bomb that occurs when you set up your own PKI--good luck
remembering to replace that root CA ten years down the line when it explodes.

The cynical part of me says that certain companies might be very interested in
the search possibilities gained from exposing internal networks to the
Internet, and the increasing lock-in that occurs when you make your systems
dependent on their public CA instead of your own private one. But perhaps
that's just the tinfoil-hat talking. :)

------
stygiansonic
I understand the reasons behind wanting to shorten certificate validity
periods, but CA or root certificates often have expiration periods far into
the future. What’s the argument for this? Ease of use? Historical reasons?

~~~
jakub_g
It's not easy (at least this was the case until a few years ago) to ship
updates to an old device (think Android 4.x or Windows XP; even worse for
embedded systems). Hence, to keep otherwise fully functional devices from
becoming useless bricks, root certs need at least a decade of validity.
(That's my personal theory; I'm not in the industry.)

~~~
userbinator
I think the industry is very much trying to push for the exact opposite of
that, with things like planned obsolescence to ensure you keep consuming their
latest products.

------
cutler
Just look at the state of the "modern" internet with its certification wars,
two-factor auth, Google SMTP gatekeeping, mandatory HTTPS, and Let's Encrypt
v2 & certbot renewal fsck-ups. What hope in hell does the ordinary user have
of navigating all this? Do we all now need to be experienced sysadmins just to
use the internet?

------
enigmabridge
Apple Drops SSL/HTTPS Bomb - Forget Long Certificates

[https://keychest.net/stories/apple-drops-sslhttps-bomb-
forge...](https://keychest.net/stories/apple-drops-sslhttps-bomb-forget-long-
certificates)

------
OrgNet
Google doesn't trust my password for a Gmail account that I last used 6 months
ago (I also had to provide one of my previous passwords to be able to log in).
It is getting ridiculous...

Now I have to keep a history of passwords, just for Google...

------
moneromoney
Apple's browser is already called "the New Internet Explorer":

[https://fabiofranchino.com/blog/css-height-parent-flex-
safar...](https://fabiofranchino.com/blog/css-height-parent-flex-safari-
issue/)

[https://dev.to/nektro/safari-is-the-new-internet-
explorer-1d...](https://dev.to/nektro/safari-is-the-new-internet-
explorer-1df0)

[https://arstechnica.com/information-technology/2015/06/op-
ed...](https://arstechnica.com/information-technology/2015/06/op-ed-safari-is-
the-new-internet-explorer/)

... and millions of similar articles

I develop widgets for web developers, and I had to change them to stop using
'fixed' positioning anywhere, because Safari on iOS is the only browser that
interprets it differently on touch-enabled devices. It has many other issues
as well. I have to spend half my time working around their buggy browser! Not
even the old Internet Explorer caused as many issues as Safari does.

They torpedoed progressive web apps and many important web standards. Why?
Because they want you to use their App Store, where they can take 30% of every
transaction without doing anything.

I hope Apple goes bankrupt and stops hurting the web community as it has so
successfully in past years.

------
gsich
Safari market share ... so who cares.

I am not talking about mobile usage.

------
X-Istence
I posted about this a couple of days ago:
[https://news.ycombinator.com/item?id=22373673](https://news.ycombinator.com/item?id=22373673)
didn't get much traction then :-(

------
dboreham
Is this an early April fool article?

~~~
tambre
What do you find strange about this?

~~~
dboreham
That the CAs are happily still selling 2yr TTL certs?

------
z3t4
I don't think certificates are a good solution for most websites. Instead,
have the browser store the public key the first time you visit the site, then
ask the user every time it changes, like with SSH. Browsers should also send
their public key to the server, so that we don't have to come up with a new
password for every damn site.
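The SSH-style trust-on-first-use model being proposed can be sketched in a few lines. In a real browser the key would come from the TLS handshake; here it's just bytes passed in, and the "store" is an in-memory dict:

```python
import hashlib

# Hypothetical TOFU store: host -> fingerprint of the key seen first.
known_keys = {}

def check_site(host, public_key_bytes):
    """Remember a site's key on first contact; flag any later change."""
    fp = hashlib.sha256(public_key_bytes).hexdigest()
    if host not in known_keys:
        known_keys[host] = fp
        return "trusted-on-first-use"
    if known_keys[host] == fp:
        return "ok"
    return "KEY CHANGED - prompt the user"

print(check_site("example.com", b"key-v1"))  # trusted-on-first-use
print(check_site("example.com", b"key-v1"))  # ok
print(check_site("example.com", b"key-v2"))  # KEY CHANGED - prompt the user
```

The reply below points out the weakness: the model can't tell a legitimate key rotation from an attack, so the prompt lands on a user with no way to decide.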

~~~
jaywalk
>Instead, have the browser store the public key the first time you visit the
site, then ask the user every time it changes, like with SSH.

The vast majority of users will not understand this. Also let's say you go to
a site one day, and you get a prompt that their certificate has changed. Is
this legit? How can you know?

>Browsers should also send their public key to the server!

This already exists (HTTPS Client Certificates) but it's a huge pain in the
ass so it's barely ever used. When it is used, it's generally within a
controlled corporate environment.
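For reference, the server side of that mechanism is still supported in, e.g., Python's stdlib `ssl` module; a minimal sketch of a context that demands a client certificate (the file paths in comments are hypothetical placeholders):

```python
import ssl

def make_mtls_server_context(ca_file=None):
    """Server-side TLS context requiring a client certificate (mutual TLS)."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.verify_mode = ssl.CERT_REQUIRED  # reject clients without a valid cert
    if ca_file:
        # CA that signed the client certs, e.g. a corporate internal CA:
        ctx.load_verify_locations(ca_file)
    # The server's own identity would be loaded like so:
    # ctx.load_cert_chain("server.crt", "server.key")
    return ctx

ctx = make_mtls_server_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
```

The machinery exists on both ends; the "pain in the ass" part is issuing, distributing, and renewing the client certificates, which is why it mostly survives only in managed environments.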

~~~
z3t4
There could still be a trust chain, but most websites don't need it. Just as
most websites only use the basic level of validation today, all they really
want is encryption (or the green padlock :P).

Client certificates have kinda been deprecated by browsers, which is why
they're such a PITA. Handling certificate signing yourself is also a PITA. But
most websites wouldn't need signing, just encryption.

Automatic SSL certificate signing is not that secure either: if attackers can
mess with DNS, or get access to the HTTP server, they can also obtain a fake
certificate via Let's Encrypt. And if attackers have access to the client,
they can sneak in a root certificate. Most nation states and ISPs (those who
would like to spy on their citizens) already have root certificates. All that
SSL certificates do is create extra work for site maintainers. We can have
encryption without certificates: if you, for example, side-loaded a list of
public keys for the main sites you visit, that would be much more secure than
SSL certificates are today.

