The issue lies in how browsers handle the HTTPS system. SSH can do encryption without requiring third-party identity verification: it asks "Do you want to trust this new server?" and then, if the key changes, informs you of that. Browsers could easily implement the same for .local with self-signed certs.
Of course browser developers assume everyone has internet all the time and you only access servers with signed domains. I’ve wondered what it’d take to get an ITEF/W3C RFQ published for .local self-signed behavior.
(Edit: RFQ, not my autocomplete’s RTF)
For these types of sites we run a local CA, and sign regular certificates for these domains and then distribute the CA certificate to our windows clients through a GPO. When put into the correct store, all our "locally-signed" certificates show as valid.
In other instances, where I haven't been able to do that, like for disparate VPN clients and such I will generally assign a RFC1918 address to it. Like service.vpn.ourdomain.com resolves to 10.92.83.200. As long as I can respond to a DNS challenge, I can still get a letsencrypt certificate for that domain.
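The local-CA approach described above can be sketched with plain openssl. All names here (Example Local CA, intranet.example.lan) are hypothetical placeholders; the point is the flow, not the specific commands your tooling might use:

```shell
# 1. CA key + self-signed root cert, valid ~10 years.
openssl req -x509 -newkey rsa:2048 -sha256 -days 3650 -nodes \
  -keyout ca.key -out ca.crt -subj "/CN=Example Local CA"

# 2. Key + CSR for the internal service.
openssl req -newkey rsa:2048 -sha256 -nodes \
  -keyout server.key -out server.csr -subj "/CN=intranet.example.lan"

# 3. Sign the CSR with the CA, adding the SAN modern browsers require.
printf "subjectAltName=DNS:intranet.example.lan\n" > server.ext
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -days 825 -sha256 -out server.crt -extfile server.ext

# 4. Any client that trusts ca.crt (e.g. pushed via GPO into the
#    Trusted Root store) now accepts server.crt as valid.
openssl verify -CAfile ca.crt server.crt
```

Distributing ca.crt is the only per-client step; everything else stays server-side.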
This is basically what I've been doing lately as well. I'll create a wildcard letsencrypt cert for .vpn.ourdomain.com and then point the subdomains to internal IPs. You can even set up a split-dns where it responds to the challenge txt records for letsencrypt, but only the internal side responds to requests under .vpn.ourdomain.com.
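The split-DNS half of this is small. Assuming a dnsmasq resolver on the internal side (names and addresses here are hypothetical), the public zone carries only the `_acme-challenge` TXT records for the dns-01 validation (e.g. `certbot certonly --preferred-challenges dns -d '*.vpn.ourdomain.com'`), while the internal resolver answers for the actual hosts:

```ini
# /etc/dnsmasq.conf on the internal resolver.
# These names never resolve from the public internet.
address=/service.vpn.ourdomain.com/10.92.83.200
address=/git.vpn.ourdomain.com/10.92.83.201
```

Outside the VPN, lookups for these subdomains simply return nothing, so the wildcard cert is valid but only usable internally.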
Asking the end user to accept downgraded security is a huge security antipattern.
Also, if I’m operating an evil wifi AP at a coffee shop and I intercept your web request for bankofamerica.com with a redirect to bankofamerica.local, would HSTS prevent the redirect? Or could I then serve you a bad cert and trick you into accepting it?
Also, what sokoloff said makes a lot of sense. Encryption without authentication is worthless, and that cert chain only works in so far as someone at the top vouches for someone’s identity. If that’s your print server, then you are the one vouching for its identity. It makes more sense for you to be the certificate authority and just build your own cert chain.
“You’re connecting to an IoT device that has a worthless certificate. Would you like me to open up a completely pointless AES256 session with it and pretend that you have a secure connection?”
Just use HTTP.
You’re effectively claiming SSH’s encryption is pointless and/or useless because it doesn’t use certificate chains to verify the host. Your argument is the same as saying that any devops engineer SSH’ing into a new local server is doing something pointless and should just use telnet.
Ideally, I think, something like this: "You're trying to connect to a new device on your local network. To ensure the connection is secure, please check that the device has a display or a printed label that says 'HTTPS certificate ID: correct horse battery staple couple more random words'." (Mobile devices might suggest scanning a QR code instead.)
I'm pretty sure if at least one major browser vendor would implement something like this (denoted by a special OID on the certificate), IoT vendors would be happy to follow. Verifying a phrase or scanning a code is not a big burden, and it resolves trust issues.
The fingerprint could come either from a private key generated on the device (for devices that have a display and can show dynamic content), or from the vendor's self-signed "CA" with special critical restrictions (no trust for any signatures unless individually verified, and signed certs only valid on what clients consider a local network), whose private keys are not on the device itself (for devices with printed labels, to avoid shipping the same private key on every device).
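The device-side half of the first option is already trivial with openssl. A sketch, using the hypothetical mDNS hostname my-device-23ed.local (a wordlist encoding of the fingerprint bytes, as proposed above, would be a thin layer on top of this):

```shell
# Generate a per-device key and self-signed cert at first boot.
openssl req -x509 -newkey rsa:2048 -sha256 -days 3650 -nodes \
  -keyout device.key -out device.crt -subj "/CN=my-device-23ed.local"

# This is the value to print on the label or show on the display,
# and what the browser would ask the user to compare against.
openssl x509 -in device.crt -noout -fingerprint -sha256
```

The browser side is the missing piece: a standardized prompt that compares this fingerprint against what the user reads off the device.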
I’m still wary of any flow that would have browser users “accepting” a device as secure - could I impersonate that device on the local network? Could I convince someone to accept a site on the wider internet as their IoT device? Someone smarter than me needs to think hard about these questions.
Maybe another approach would be to build infrastructure (like protocols and client software) to make building a home cert chain easy? A windows client that would let you create a root cert, install it in your cert store, and then give you server certs to hand out to devices? Give it a consumer friendly brand name or something and get IoT vendors to add a front-and-centre option to adopt a new server cert.
Authentication isn’t a tricky problem; it’s the trickiest.
It’s about accepting that communication with the device is secure, not guaranteeing that the device itself is secure. In reality you don’t know if your bank’s servers are secure or if they encrypt passwords properly, etc, but you do know it’s them and your communication isn’t tampered with.
> could I impersonate that device on the local network?
Not readily with a device-specific ID check and TOFU (trust on first use), similar to SSH. If the device certificate were stored permanently for .local URLs like `my-device-23ed.local`, then anyone who tried intercepting or MITM’ing that device would trigger a warning for the user: "Warning: device identity has changed, please check that your device is secure..." etc.
Without any browser support for .local certificate or identity "pinning", anyone who compromises your network (WiFi PSK hacking, anyone?) can impersonate a device and you’d never know it. Browsers forget self-signed certs regularly, if they let you "pin" the certificate at all. A hacker can intercept the .local URL (trivial) and use another self-signed cert; the user’s only real option is to blindly accept it whenever that happens. Then an intruder can MITM the connection to the device all they want. Is your router’s config page really your router? Who knows.
> Could I convince someone to accept a site on the wider internet as their IoT device? Someone smarter than me needs to think hard about these questions.
Any `.local` domain isn’t allowed to have a normal cert or global DNS name. They could trick the user on first use, but again, a device-specific ID check on first use would make that harder to do. After trust-on-first-use, any later access couldn’t be tricked without an explicit warning to the user that the device identity has changed and something funny might be happening.
If browsers implemented entering a device-specific code as part of the "do you accept this device" prompt on first use, that’d make it a much more usable and secure pattern. It’d standardize the pattern and encourage IoT shops to set up the device-ID checking properly.
To impersonate a device using .local certificate/identity pinning, a hacker would need physical access to the device to get its device-ID code, then hijack the mDNS request for the correct device-specific .local address (on first use!), then set up a permanent MITM in order to impersonate the device. Otherwise the user would get a warning. Possible, but it requires serious resources. With physical access you can modify hardware, possibly install a false cert on the user’s machine, etc., so security in that scenario is largely compromised already.
Perhaps some IoT devices use custom apps and certificates, but many just use HTTP, or self-signed HTTPS. In my experience, IoT device makers have little experience with things like creating a CA, and getting users to install one would be a headache. Time is money on those projects, and if an entire factory is down because someone can’t figure out how to install a certificate chain on Windows 7, users will complain loudly. Currently IoT is split between IT-installed certificate chains and no security at all, so many go with none at all.
Therefore the current status quo of browser certificate handling on .local domains encourages far more security gaps, and effectively makes it difficult for non-internet-connected devices to operate securely without a fairly expensive and complicated IT setup.
My browser warns me, I can accept the warning for that particular certificate, and it warns me again if it changes.
Chrome and Firefox remember the acceptance of self-signed certs for a long time on my PC.
It took me a little over an hour one evening to figure out how to create my own CA, trust it, and sign certs for all my local devices (except my UniFi cloud controller, which I admit I gave up on due to time).
The argument here is that we should enable lots of shitty IoT devices to masquerade as being secure, and inure browser users to click ‘yes’ to accepting a broken certificate.
If it’s on a managed network, IT can set up a certificate and push that out to client machines. If it’s on your home network you can do that (unless your IoT device can’t take a user configured client cert, in which case it’s rubbish anyways), and if you can’t then you might as well use HTTP.
The average user knows absolutely nothing about routing; most will throw their hands up, or their eyes will swim if you so much as mention something like an IP address.
They also know nothing about DNS and don't have to: because we always give them defaults that they never see and they go along with their lives.
As for wifi, once again, largely automated. Most people never change the default SSID and password. There's some manufacturers that will make a good UI, but it stops at the SSID and passwords because that's the extent of most users' understanding. Some users have a vague understanding that 2.4GHz and 5GHz is different, but don't know the significance of the difference. Channel, authentication type, and other options aren't given to users in those UIs because they simply wouldn't know what to do with it and people don't read manuals anyways.
The problem is that, to figure out whether to trust the server, you need to get its fingerprint through another channel. Is there an HTTPS equivalent of that?
That's basically how it works, though; your OS packages a group of trusted CA certs. You can add additional trusted CA certs, even ones minted by you, to ensure your apps trust the connection.
* Manually install a root certificate, which is a confusing process for most end users and a non-starter for anyone who cares about security. (Imagine walking your parents through the process.)
* Trust a self-signed certificate, which is an increasingly difficult and counterintuitive process since Chrome and Firefox started competing to see who could destroy their usefulness faster. I'm not even sure if it's possible anymore.
Neither of these are acceptable.
If you are doing something for an end user, I think it makes a lot of sense just to get a certificate; it's just not a large barrier anymore.
It is different from the CA PKI system, where the client trusts any certificate signed by a trusted CA without prompting the user at all, and doesn't prompt the user if the certificate for a site changes.
I'm not sure self-signed HTTPS can do much better than this anyways.
(Yes, yes, it's a crazy idea, hehe)
Self signed HTTPS works for this case as long as you know the fingerprint/cert to accept.
So you would need to ship a crypto library in JS, hehe :)
Self-signed certs probably do work, if you install the certificate root on your machine. It's just not something you would advise end users to do.
Sorry for the mostly insubstantial comment but it may help you in the future: it’s RFC (Request For Comments) not RFQ.
And it’s IETF (Internet Engineering Task Force) not ITEF.
You can also get a certificate through the Let's Encrypt DNS challenge without having to expose a server to the Internet, but you'll still need ownership of a domain name and either an internet connection or a local DNS server to support HTTPS using that certificate.
There is always the option of creating a local certificate authority for your devices, but this is kind of a pain. There are some new applications that aim to make this easier, but there is no easy way around having to install the root certificate on each device.
For example, you could have serialnumber.manufacturer-homedevices.net, and each device would get a cert for its serial's host name. Ideally, you should properly secure that API with some form of attestation key included on the device. Alternatively, the host name could be e.g. the hash of the device's generated key (that way you could ship the devices without placing individual keys on them, but the host name would change after a factory reset).
Making this actually secure is hard, though, because you need the user to visit the URL for his device. If an attacker can simply get a cert for differentserial.manufacturer-homedevices.net and direct the victim there, you don't win much actual security.
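The hash-of-key naming scheme is easy to prototype. A sketch with openssl, where manufacturer-homedevices.net is the hypothetical vendor domain from the comment above and the 16-hex-character label length is an arbitrary choice:

```shell
# Generate the device's key (done once, on-device).
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out dev.key 2>/dev/null

# Hash the DER-encoded public key and keep a short prefix as the label.
label=$(openssl pkey -in dev.key -pubout -outform DER | \
        openssl dgst -sha256 -r | cut -c1-16)
echo "${label}.manufacturer-homedevices.net"
```

After a factory reset the key changes, so the label (and the hostname) changes with it, exactly as described above.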
Your device connects out with some kind of persistent connection to their central service then requests to your device go to their server, which does AAA and routes to your local device. Fixes the SSL issue, avoids any NAT headaches, enables fully remote access and most importantly for PMs it makes the device useless without your server-side components. If there is any local accessibility at all, it can be neutered or reduced.
I don't entirely hate this model; it's not my favorite, but it's the way things are going.
Why would I even bother copying and distributing self-signed certificates if I can just properly get a certificate for my own personal router?
It’s idiotic that people still trust pure HTTP and have no option of switching.
Or if you change ISP and need to change your router internet connection configuration, your router cannot be accessed.
I understand if that router is something industrial, but then you can probably figure out how to do that over SSH anyway (which is secure).
It requires people to care more about "self hosted" than "PM says this will centralize user access and allow us to collect data and better monetize"
I'm not knocking either model. They both work (technically) but you need to understand your market and what works better for them.
If your vendor device or software doesn't support automated certificate rotation, put nginx/haproxy/envoy in front of it.
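A sketch of that pattern, with hypothetical names and addresses: nginx terminates TLS with an automatically rotated certificate and proxies to the device, which never needs to know certificates exist.

```nginx
server {
    listen 443 ssl;
    server_name printer.vpn.ourdomain.com;

    # Certbot-managed paths; nginx is reloaded after each renewal.
    ssl_certificate     /etc/letsencrypt/live/printer.vpn.ourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/printer.vpn.ourdomain.com/privkey.pem;

    location / {
        proxy_pass http://192.168.1.50:80;  # the legacy device itself
        proxy_set_header Host $host;
    }
}
```

The device keeps speaking plain HTTP on the LAN segment behind the proxy; only the proxy's certificate ever needs rotation.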
The only way I see how this would work is if you not just purchase a domain but also an internet-facing server, and do the renewal and certificate management centrally for all devices - at which point, your device is definitely not standalone anymore.
Plex does this, for example, though they use DigiCert's free certificates: https://www.plex.tv/blog/its-not-easy-being-green-secure-com...
https://letsencrypt.org/docs/certificates-for-localhost/ has great documentation on that topic, including more examples.
* Every machine in your infra already has backups, right? Nothing about your signing boxes is special in this regard.
* All your services are already HA, right? The API servers that now have to run some glorified OpenSSL commands aren’t any different than your normal API endpoints.
* You already have to protect secrets on your machines. DB passwords, API keys. What’s one more?
* You don’t have to implement ACME. These are your devices talking to your devices.
There remains the question how I would get the CA certificate onto client devices in the first place.
Lastly, with asking consumers to install a CA certificate, I ask for a significantly more powerful permission than if I could just have them trust my certificate. This seems like a step backwards security-wise.
CA certificates can be constrained. https://tools.ietf.org/html/rfc5280#section-4.2.1.10
When I tried to use this many moons ago, most things ignored the constraints; although I could mark the extension critical, and then some (but not all, yay) of the things that didn't understand would refuse the CA.
As does webpki:
But haven't tested it (or checked other libraries).
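For reference, a constrained CA can be minted with an openssl extension section along these lines (hypothetical names; as noted above, client enforcement of the constraint has historically been patchy):

```ini
; Used as: openssl req -x509 ... -config this-file -extensions v3_constrained_ca
[ v3_constrained_ca ]
basicConstraints = critical, CA:TRUE
keyUsage = critical, keyCertSign, cRLSign
; Marked critical so clients that don't understand name constraints
; should reject the CA rather than silently ignore the limits.
nameConstraints = critical, permitted;DNS:.home.example.org
```

With that in place, the CA can only issue certs for names under .home.example.org, at least in clients that honor the extension.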
The device will also only be accessible if an internet connection is present, even if both the device and the client are in the same LAN - because the client has to access the device through the domain.
This means, should I ever lose the capacity to support the device and renew the domain, the device will become useless, even if technically, it is still completely functional.
That’s not true at all. I’ve created a CA and a script to generate and sign server certificates, and I’ve been generating them left, right, and centre for my very standalone, local-network-only services with no access to the internet whatsoever. I added my CA to my browsers and my iPhone and everything works perfectly.
I'm thinking of an example to illustrate what I mean. (Sorry if this appears to be moving the goalposts)
Imagine some small business is selling a home surveillance camera, or a network printer, or whatever else. The thing is that it's a product intended for private, layman consumers and intended for LAN use.
With HTTP, you could add a local web server as a simple way to manage the device pretty easily: Just open a server, communicate the IP address to the user, done. No internet connection required, no continuing support from the company required. Even if the company went bust, the existing units continued to work and the web interface stayed accessible.
There seems to be no good way to replicate this with HTTPS. The closest seems indeed to be a custom root CA - however, then you need to communicate to your users how to install the CA certificate on their own devices, clicking through all kinds of scary warnings and dismissing "this section is for admins only" notices. I predict that not a lot of people would do that.
This also leaves you with the challenge of safely getting the certificate to your users. You could serve the certificate from the device over HTTP - however, then you'd require your customers to download a root certificate over an unencrypted connection without any integrity checks and install it on their device. This seems like ripping open a major security hole.
Meanwhile, even if the company purchases a domain and attempts to get a certificate from a public CA, deployment will be difficult as described in all the other branches of this thread.
In short, I think you can pick any three of the following four conditions, but I see no way to achieve all four at the same time.
(1) use modern web features (all recently added and all future features require https)
(2) have your site usable on a client device that does not belong to you
(3) present a non-confusing user experience (no cert warnings, etc)
(4) have the device stay accessible even after you stop actively supporting it (by purchasing domains, running cloud services, having deals with CAs, etc etc)
Because the hardware vendor does not own or configure the private network, they are not able to certify to the network’s users that a particular network node is the device it’s supposed to be, and not an impersonator. Only the network administrators can do that, and so it is the network administrators that must generate the certificate and install it on the device. In this way the admins bestow a programmatic declaration of trust on the network node.
The device manufacturers can only provide tools for showing that the device was not tampered with. TLS/SSL certificates are not for that purpose.
I'm wondering where the impression of "not any more" comes from.
Really, the situation hasn't changed much. You can have your HTTP web interface. You can have HTTPS with a self-signed cert and click away the warning. The only thing that really has changed is that for your HTTP connection you will get a warning that the connection is not secure.
I don't think the ability of browsers to load HTTP pages will go away any time soon.
xg15 is going to have to run a self-hosted Certificate Authority (CA) and generate certificates himself.
This document specifies that the DNS top-level domain ".local." is a special domain with special semantics, namely that any fully qualified name ending in ".local." is link-local, and names within this domain are meaningful only on the link where they originate. [...] Any DNS query for a name ending with ".local." MUST be sent to the mDNS IPv4 link-local multicast address 224.0.0.251 (or its IPv6 equivalent FF02::FB).
I'd recommend using something like .lan instead.
Using an unregistered domain like .lan has serious security implications. See here: https://serverfault.com/a/17566
Personally speaking, I'm not too worried about .lan getting registered as a gTLD anytime soon. I'm a lot more worried about forgetting to renew my domain and having things horrifically break if/when that domain gets picked up by someone else. This is a lot more likely...
It would be enough to send it as an intermediate CA cert, no need to install.
Going the self-signed DNS name restricted CA way would likely still not fly with browsers, because there's no way to securely deploy the trust root. (Because if it requires user interaction to install that can be exploited by malicious actors.)
Both Chrome / Chromium and Firefox have explicitly set policy that new features (as opposed to tidier ways to do things that already exist like DOM improvements) will require Secure Context, and there's already a weak assumption that even some tidying up will go into secure context when the rationale for not doing so is shaky (e.g. some of the web crypto features that needn't technically require Secure Context do anyway).
Why use slightly compromised HTTPS versus plaintext HTTP? Same reason they have those super cheap locks on diaries from the 90s: it's a deterrent. Makes it a little harder to do a bad thing.
You are missing what happens instead. There is simply no web management interface on the device anymore. You need to download the vendor's app to configure and use the device. Maybe, if the vendor cares, they use their own CA to secure a local connection to the device. Much more likely, the app and device exclusively talk to their cloud and use that as a middleman to exchange information.
> SUB ITEM 3.1: Limit TLS Certificates to 398-day validity
>
> Last year there was a CA/Browser Forum ballot to set a 398-day maximum validity for TLS certificates. Mozilla voted in favor, but the ballot failed due to a lack of support from CAs. Since then, Apple announced they plan to require that TLS certificates issued on or after September 1, 2020 must not have a validity period greater than 398 days, treating certificates longer than that as a Root Policy violation as well as technically enforcing that they are not accepted. We would like to take your CA’s current situation into account regarding the earliest date when your CA will be able to implement changes to limit new TLS certificates to a maximum 398-day validity period.
CCADB is a totally different service run by Mozilla and Microsoft (using Salesforce, I presume because they both agree this is terrible but neither can accuse the other of using their preferred pet technologies?) notionally open to other trust stores to track lots of tedious paperwork for the relationship with trusted CAs. Audit documents, huge lists of what was issued by who and to do what, when it expires, blah blah blah. Like a public records office it's simultaneously fascinating and a total snooze fest. Mozilla is using it in this case to conduct their routine survey of CAs to check they understand what they're obliged to do, they're not asleep at the wheel and so on.
Anyone using email validation now needs to click a link every month, or their cert goes away.
I used to have the unfortunate task of managing a massive SAN cert used for white-label hosting with a bunch of our customer's domains.
Getting every single customer to get their tech person to look at the mailbox and click a link was often a multi-month process.
If you're in a position to MITM using a stolen certificate, you're probably also in a position to block the CRL response from going through. Since failing to get an updated CRL doesn't result in a security warning, your CRL proposal is essentially useless.
Not if the certificate is OCSP-Must-Staple.
Short cert lives make certain decloaking much, much more difficult.
It seems like driving this number up is a better way of dealing with historic traffic than quickly expiring certs. Limiting the duration of leaks of future traffic seems like the right justification for short lived certs.
That recent GnuTLS bug resulted in bad guys not even needing to steal that resumption key for any servers using affected versions of GnuTLS because GnuTLS was just initialising it to zero...
Digicert is in the process of migrating their customers to ACME (the issuance protocol used by Let's Encrypt and certbot). Where's your god now? :)
I can foresee the browsers eventually treating self-created CAs like they currently treat self-signed certs. If they're not traceable to a trusted root CA, then there's no accountability, from a browser perspective, in the event of abuse or breach.
Without centralization I can MITM at the coffee shop and steal passwords.
This is such an arbitrary decision, and such a pain in the ass. Again, a limited number of people used their corporate interests to decide for the whole world with almost no discussion.
The worst is that the "security" argument for this change is quite weak. Yes, we can think that shorter certificates are a little bit better to trust for the user, but that should be the choice of the website that you visit.
Now you, as a user, are considered so stupid that browsers will decide for you which websites are deemed safe to visit, the same as with app stores.
Compare that to the good old times, like traditional PC software installation, where it was you, the user, who was free to decide which websites you wanted to trust:
google.com vs myshaddyfraudyweb.com
This is a clear security win, and thus good for users. And no, I don't trust websites to have my best interests in mind, not remotely. Hell, if browsers hadn't started warning about insecure connections then I suspect that even to this day most websites would still be insecure. We used to leave it up to the choice of each website, and that was a clear failure, and now they're being forced to provide better security, which is a clear win.
Moreover, this didn't come from CA/B anyway, it was rejected there. CA/B agreed the previous 825 day limit, and the 39 month limit before that, but this new rule did not get support at CA/B so Apple imposed it unilaterally (and with some really poor communication but whatever).
Google and Mozilla have just decided that since they wanted this limit, and Apple has effectively imposed it anyway, they might as well go along for the ride.
People can barely tell whether it's really microsoft calling them saying their computer is infected. What makes you think they'll be able to tell the difference between google.com and google-secure-login.com, or whether they should download the "codec pack" that their shady streaming site is offering?
That sounds like a disagreement; it benefits the user, so let the website opt out? Because websites are known to have users' well-being in mind?
I would think the choice on how long to trust a certificate should be on the user, possibly using the hint that the creator of the certificate gave. You wouldn’t trust a certificate from evil-empire.com, no matter its expiration date, would you?
The discussion should be about whether the browser should make that decision on behalf of the user. I’m not sure I’m in favor of that. On the other hand, browsers already do a lot in that domain, for example by their choice of trusted root certificates (and changes to that list)
So in the end, websites would determine their own 'trust value' without the browsers playing police, which would leave room for special cases.
For example, if I make a device that is to be used offline for 3 years, logically the user will see no issue with a 5-year certificate.
Too bad those times are over and we have this browser cartel enforcing some basic security standards for TLS. Screw them!
We got plenty of gradual improvements over time. Validity time does not stop incidents, but it makes the impact smaller and allows ecosystem improvements to propagate faster.
Take for example Certificate Transparency, which is one of the most important ecosystem improvements. It was required for new certificates in 2018. But we still can't rely on Certificate Transparency logging for all certificates, as the certificate lifetimes were so long.
In the future, such improvements will take at most a year until all certificates have them.
It's an uphill battle but I'm glad browser vendors are fighting it.
// For certificates issued on-or-after the BR effective date of 1 July 2012:
// 60 months.
// For certificates issued on-or-after 1 April 2015: 39 months.
// For certificates issued on-or-after 1 March 2018: 825 days.
// For certificates issued on-or-after 1 September 2020: 398 days.
Does that mean that next May, for the first time ever, the domains of all HTTPS sites on the web will be recorded in a public log? I think the only caveat to that is wildcard certificates.
Although the Chrome mandate only technically kicked in on 30 April in practice most CAs were considerably ahead of that date, in addition some of the logs are open to third parties uploading old certificates, Google even operates logs that deliberately accept certain untrustworthy certificates, just because it's interesting to collect them.
If you're excited to know what names exist, the Passive DNS suppliers can give you that information for a price today, their records will tell you about names that aren't associated with any type of certificate, and lots of other potentially valuable Business Intelligence. They aren't cheap though, whereas harvesting all of CT is fairly cheap, you can spin up a few k8s workers that collect it all and store it wherever (this is one of the tasks I did in my last job).
The CABF has talked about doing this before, most recently in SC22 (https://cabforum.org/2019/09/10/ballot-sc22-reduce-certifica...). In that case all browsers supported it, but it wasn't passed by the CA side.
It's possible and free for small players to use letsencrypt, but that still takes some time to set up, manage, and maintain over time.
Without automation, you've got an annual chore to do or your site goes offline.
I think some hosts are already starting to offer free and easy SSL certs to their small customers, but I do expect automated SSL management to be generally available for the masses before this takes effect.
Much better to have a separate central cert management system that handles renewals and pushes the certs outwards to the DMZ systems.
You can also use it as a certificate manager independently of a web server if you want.
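Once issuance is automated, the annual chore disappears. A hypothetical setup: the cert is obtained once (e.g. `certbot certonly --standalone -d host.example.org`, assuming a real domain and port 80 reachable), and renewal is handled by a scheduled job like this crontab fragment:

```ini
# 'certbot renew' only renews certs nearing expiry, so running it
# daily is safe. The deploy hook reloads the web server only after
# a successful renewal.
0 3 * * * root certbot renew --quiet --deploy-hook "systemctl reload nginx"
```

Most certbot packages install an equivalent systemd timer out of the box, so in practice even this line is often unnecessary.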
How is HTTP harmful when you visit my website about amateur radio? An expired cert is no more harmful than bare HTTP in this non-commercial, non-institutional, personal context. That's the one being discussed in this sub-thread, in case you missed it and assumed the normal HN business context.
The burden is real and completely unnecessary for personal websites. This makes the web more commercial by imposing commercial requirements on everyone.
It's what killed off self-signing as a speed bump against massive surveillance, and centralized everyone into the benign dictatorship of letsencrypt. But centralization will lead to problems when money is involved. Just look at dot org.
The real harm comes from this fetishism of commercial/institutional security models.
"Unharmful" HTTP sites are used to silently hack people's computers and keep them under observation for months. Every unsecured site contributes its small piece to keeping the web unsafe for people who need it to be safe.
If your threat model includes nation state attacks you're gonna have problems no matter what. Change your personal behavior accordingly. Don't tell everyone else they need to wear bullet proof vests around the house and hire corporate security goons. They don't and doing so is burdensome.
So, in a way, it's probably just a matter of time before the kind of silent hack depicted in the Amnesty article is used for attacks targeted at more general victims. I don't look forward to the day when just reading an unprotected HTTP site is enough to get my phone compromised as part of a widespread scamming effort from someone trying to get credit card details or banking stuff... but it will probably end up coming if we don't move all together toward a more secure WWW.
They aren't problems with security in HTTP versus HTTPS for a personal or small business static website.
Attacks on HTTP sites are known threats that we have evidence for; they aren't ridiculous or unheard of. The defense is not "everyone wear a bulletproof vest", it's "get a certificate and set up HTTPS" - a one-time cost that will protect thousands of people.
You're making the choice for your users, who may not be as informed as you are, to not protect them. That's very different from asking them to wear a bulletproof vest.
There was already a fetishism of commercial/institutional security, and Let's Encrypt gave it quite the blow. Companies that used to charge a yearly fee for a certificate are now offering certificates for free.
It does stink that corporations have to be in the middle in the first place, but that's due to the difficult problem of "trust." I'm not sure it's possible to decentralize it, besides some sort of blockchain solution that would be unworkable in the real world.
I did miss that but I did not assume a business context.
> The burden is real and completely unnecessary for personal websites.
Users who visit your website are still at risk of having their connection hijacked - they could be phished, exploited, etc. Maybe you don't consider that important, and it is certainly a sort of "boil the ocean" approach, but given the efforts put in so far, I think most users are already not visiting HTTP sites on an average day. Continuing that effort seems reasonable.
> This makes the web more commercial by imposing commercial requirements on everyone.
I'm not sure what you mean.
It is extremely uncommon for me to actually visit an HTTP website - I even have HTTPS Everywhere block them by default, so I'd know if I were. That means I am relatively protected from such avenues until I visit your roof repair guy's website.
More to the point - if I am running a collection of Karl Marx works it is highly unlikely that he would request payments.
Regardless of the content, hijacking is a danger to users.
It isn't my business so I've done nothing to reach out to your users or interfere in your website. We're having a discussion about technology on a technical forum.
It is the browser developers' business though since they are tasked with protecting users from these specific threats.
And are you somehow blocking all web traffic except from people who are your users? If not, then everyone is your user.
Get out of here with this "HTTPS is unnecessary" tedium.
If we could trust the entire network we wouldn’t need TLS for any site.
Even so, this doesn't actually change much. I've never bought a certificate valid for more than a year. I'm not aware of any major player that sells certificates valid for more than a year. So this rule has existed for a long time in practice, but is only now being codified.
It's PKI for Let's Encrypt certificates. It helps you issue, renew, and revoke certs from a central place. You also get alerts so you know when things have changed, expired, or failed to renew.
While a lot of places give you certs built in, there's a whole world of places you still need certs. Like FTP, mail, behind load balancers, disparate environments and systems, etc.
In the future, I'm planning to create a way to automate the certificate exchange process. This should help with using and exchanging certs for client authentication and things like SAML SSO. If expirations get down to a month or less, I see a need for a system that helps do all of these things and more.
I'm still a bit fuzzy on this - why would I want alerting, for example? Automation is a big part of LE, and my certs are configured to auto-renew. If that was to fail for some reason, then LE will send me an email - is it this part where this tool comes in, providing improved alerts where automation has failed?
To elaborate on the why for alerting, there are many situations that I've seen where things change and subsequently fail silently. Perhaps some dependencies, or maybe configuration changes, caused things to break. Also, alerting doesn't only have to be for your certificates. You can point to any endpoint to monitor as well. There are three aspects of alerting: changes to the cert (perhaps you care about a 3rd party certificate and its underlying key changing), failure to renew, and expirations. Each comes with its own benefits and use cases.
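The expiration side of that alerting can be sketched with plain openssl. This is a minimal demo, not the tool's actual implementation: it generates a throwaway self-signed certificate (the file paths and CN are made up for the example) and then asks whether it expires within the alert window.

```shell
# Generate a throwaway self-signed cert valid for 10 days
# (paths and CN are arbitrary demo values).
openssl req -x509 -newkey rsa:2048 -nodes -days 10 \
  -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem \
  -subj "/CN=internal.example" 2>/dev/null

# `openssl x509 -checkend N` exits 0 if the cert will still be valid
# N seconds from now, and 1 if it will have expired by then -- which
# is exactly the test a "renewal due" alert needs.
if openssl x509 -checkend $((30 * 24 * 3600)) -noout \
     -in /tmp/demo-cert.pem >/dev/null; then
  echo "OK: more than 30 days left"
else
  echo "WARN: expires within 30 days"
fi
```

Since the demo cert is only valid for 10 days, the 30-day check trips the warning branch; point `-in` at a cert fetched from any endpoint you care about and the same check covers third-party certs too.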
To expand on the why a bit further for the project as a whole, it's really as a way to help consolidate and centralize things. I've seen many disparate ways of using Let's Encrypt. From various clients to some hacks to better support more complicated scenarios. By separating obtaining the certificate from applying, it helps facilitate many things, like using LE certs behind load balancers & proxies, non-standard ports, things that don't speak HTTP, etc.
If certificate lifetimes continue to decrease, we'll need the ability to exchange certificates in an automated fashion as well. I'd also like to incorporate Certificate Transparency logs so you can be sure no one else has issued certs for your domain(s). There are many cool and interesting scenarios, but mostly the challenges come when managing things at scale. So it's not really all that useful if you're only managing one or two certs.
Small players can easily get certificates manually or automate. The platforms/tools they use often give certificates out of the box (cloudflare, heroku, wordpress, etc...).
Large players can't manage certificates. Developers/sysadmins can't use Let's Encrypt because it's prohibited by higher-ups and blocked. Even if they could use it, it's not supported by their older tools and devices. The last large company I worked for had no automation around certificates, and the department that handled certificate requests was sabotaging attempts to automate, possibly out of fear of losing their jobs.
I’d say it takes less time than going through a single paid certificate store… Assuming you already have a tool. If you don’t, then maybe it’s the same or 5 minutes more.
Oh yes, there is.
> Your site needs HTTPS.
Not really. Google says "switch to HTTPS or lose ranking":
I disagree with "switch to HTTPS or lose ranking", but that's an HTTP vs. HTTPS issue with Google's search ranking, not about Chromium or Mozilla. This article is about Chromium & Mozilla making stricter rules for HTTPS certificates. That's not a bad thing, to hold HTTPS sites to a better standard.
1) In my experience, the user experience even for technical admins is still flaky on at least some popular platforms. In other words, it's not as incredible as you think.
2) It's not available to a host that doesn't connect to the internet but does occasionally get connected to by a local browser (e.g. an IoT device firewalled inside my LAN is one obvious case; I'm sure there are others).
And most importantly:
3) You'd have to be insane or naive to accept an architecture that leaves you dependent on a single vendor (especially if you need that vendor more than they need you!).
If you can't use HTTP-01 and must use DNS-01 challenge, I would check whether the software that runs your host's DNS management panel has an API in addition to manual mode. If not, I would check for ability to automate HTTP requests to that tool (parse the HTML, submit the forms, basically). My hope would be that the tool is popular and someone already did the work and code exists to operate it as if it had an API.
If you can do that, you can write (or find already written) a certbot plugin that performs the DNS challenge using your credentials for the host-provided DNS settings. certbot has a number of plugins for the big hosting providers: https://github.com/certbot/certbot
certbot is the most popular Let's Encrypt client, but it's not the only one. Maybe another client has support for your situation. I would maybe ask the support of your hosting provider, maybe they know something.
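For the simplest case, certbot's `--manual-auth-hook` lets any script publish the TXT record. A sketch of such a hook is below; certbot supplies `CERTBOT_DOMAIN` and `CERTBOT_VALIDATION` in the environment (the defaults here are dummy demo values), and the commented-out API call is a placeholder for whatever your host's DNS panel actually exposes.

```shell
#!/bin/sh
# Sketch of a certbot --manual-auth-hook. certbot sets these env vars
# for the real run; the :-defaults are only so the demo runs standalone.
CERTBOT_DOMAIN="${CERTBOT_DOMAIN:-service.vpn.ourdomain.com}"
CERTBOT_VALIDATION="${CERTBOT_VALIDATION:-dummy-token}"

# The ACME DNS-01 challenge wants a TXT record at this name:
record="_acme-challenge.${CERTBOT_DOMAIN}"

# For the demo, just log what would be created instead of calling a
# (hypothetical) DNS API such as:
#   curl -s -X POST "https://dns-panel.example.invalid/api/txt" \
#        -d "name=${record}" -d "value=${CERTBOT_VALIDATION}"
echo "TXT ${record} -> ${CERTBOT_VALIDATION}" | tee /tmp/acme-hook.log
```

You'd then invoke something like `certbot certonly --manual --preferred-challenges dns --manual-auth-hook /path/to/hook.sh -d service.vpn.ourdomain.com`, with the hook replaced by real calls into your provider's panel.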
If you can’t accept inbound HTTP traffic, you use DNS verification; and if you never contact the internet, no public cert could work for you.
Unless they set up LE for their customers.
(And as much as I like LE, I think it's complicated to depend on one issuer only.)
People who know how to set up a website on a shared hosting platform probably also know how to renew a LE certificate, I think.
If you want to run a webserver but are unable to set up a cronjob that handles renewal, then maybe running a webserver isn't for you.
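For reference, the cronjob in question can be a one-liner (the reload hook here assumes nginx; adjust for your server):

```shell
# crontab entry: attempt renewal twice a day at off-peak minutes;
# `certbot renew` is a no-op unless a cert is close to expiry
# (within ~30 days by default), so running it often is safe.
17 3,15 * * * certbot renew --quiet --post-hook "systemctl reload nginx"
```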
If it's just you and you don't care about your own security, then do whatever you want in your own browser.
It’s shit attitudes like this that killed the old internet we all loved
But if you have external users on your site sending data to your site, you have a responsibility to not treat your users' data as meaningless.