On my system, modifying the trust anchors requires root privileges.
mkcert undoes this boundary by opening a "shortcut" where any code on the machine can mint trusted certificates for any domain (like internet banking).
Why not make the CA private key only readable as root, and the issuance of new leaf certificates a privileged operation?
I would also expect that if/when you add ACME, the CA key material is locked away under a different user than the one running the desktop environment.
Maybe not every platform you support actually has this boundary, but I would expect that mkcert doesn't remove the boundary on platforms where it does exist.
Fan of your work since your Heartbleed scanner!
My reason for not making it a first class feature (with its complexity cost) is that in the vast majority of cases, an attacker with the ability to read local files can also directly attack the browser. (At least in mkcert’s use case, which is development not deployment.)
Also, I don’t really see how to maintain that boundary once ACME gives certificates to anything that can connect to a local port (without sending headers suggesting it’s a tricked browser).
I feel like this is a similar pitfall to how if you add a user to the `docker` group for convenience, you (perhaps unknowingly) gave that user root access to the host.
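For the record, the docker-group shortcut can be demonstrated in one line (an illustrative sketch only; the image name is arbitrary):

    # Any member of the `docker` group can become root on the host, e.g.:
    docker run --rm -it -v /:/host alpine chroot /host /bin/sh
    # The shell that opens is effectively a root shell on the host filesystem.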
With this minor change, mkcert still retains its full function and convenience. Just type your password once in a blue moon when you need a certificate for a new fake real domain.
I can get a valid certificate for development by simply getting a valid certificate for localhost.example.com on my server through Let's Encrypt and then making localhost.example.com resolve to 127.0.0.1 in my /etc/hosts file.
Some code can behave slightly differently on localhost than on localhost.example.com, for example when deciding whether to set cookies on a third-level domain, so tests are more reliable that way.
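As a sketch of that setup (the domain is a placeholder), the only local change needed once the certificate exists is a hosts entry:

    # Point the real name at loopback on the dev box:
    echo "127.0.0.1 localhost.example.com" | sudo tee -a /etc/hosts
    # The browser resolves the name locally but still validates the
    # real Let's Encrypt certificate served by the dev server.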
Let's say you have an app running on localhost:8000 on your dev box.
If you go to lvh.me:8000, it will work without having to install or modify anything on your dev box or modify your DNS records for another domain. If your app had an "example" subdomain you could even go to example.lvh.me:8000.
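You can check the trick yourself; at the time of writing, lvh.me and all its subdomains simply resolve to loopback (a quick sketch):

    dig +short lvh.me          # 127.0.0.1
    dig +short example.lvh.me  # 127.0.0.1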
I wrote about lvh and other useful free services that help developing apps without needing to install a DNS server or mess with SSL certificates at https://nickjanetakis.com/blog/ngrok-lvhme-nipio-a-trilogy-f....
I wish people would stop using my IP address!
I'm wondering if anyone is aware of any issues if the private keys were exposed? I'm going to assume if one can MITM my localhost, it's already game over.
So perhaps not do this if your domain is large_bank.com. But if you are aware the keys are exposed you could simply change your DNS to make it resolve somewhere else.
Of course, another option is to use localhost.junk_domain.com for testing so it doesn't matter if you lose the keys.
My slight concern is that most of the pre-built binaries (the "pre-built binaries" link goes to https://github.com/FiloSottile/mkcert/releases) are not signed. There are only a few releases that are marked Verified. Since this messes with the local trust store, it would be good to have things signed.
The snark is directed at this sentence:
> If you are doing TLS certificates right in production, you are using Let's Encrypt via the ACME protocol.
I agree that Let's Encrypt is great, but I dislike the implication that, if you're not using Let's Encrypt, you're not doing TLS right. Or maybe the text was supposed to be read as…
> If you are doing Let's Encrypt TLS certificates right in production, you are using the ACME protocol.
As for the snark, I stand by my provocation: I believe it’s best practice to automate TLS certificate renewal, and for 95% of deployments that means ACME, which for 95% of cases currently (but not necessarily) means Let’s Encrypt.
From time to time, I encounter projects where upstream doesn't support HTTPS and none of the tarballs have signatures. There's high psychological pressure as a packager when dealing with projects like this: it's totally impossible to verify the integrity of the project, and users may get pwned because of my package. My solution is to (1) download the packages from different servers and compare the hashes, and (2) check the hash against the repositories of other Linux distributions, to make sure at least my own network was not subjected to a MITM attack...
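A rough sketch of that cross-check (all URLs and filenames are placeholders):

    # Fetch the same release from two independent mirrors...
    curl -Lo a.tar.gz http://mirror-a.example.org/project-1.0.tar.gz
    curl -Lo b.tar.gz http://mirror-b.example.org/project-1.0.tar.gz
    # ...then compare the digests with each other and with the checksum
    # another distribution recorded in its packaging repository.
    sha256sum a.tar.gz b.tar.gz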
Automatic software installation is really a great way to introduce backdoors into systems.
Please correct me if I'm wrong, but my impression was that Windows offers several certificate stores on several security levels (AD-domain, machine, user), each one able to separately set ACLs for the certificate container, the individual certificate and the private key.
Just because Linux and OSX don't support fine-grained permissions doesn't mean "current desktop OSes" don't. In fact the dominant desktop OS out there does this just fine and has for decades.
It's the niche hipster OSes used by developers and security experts (like Linux and macOS) which lack this fine-grained control (and I'm saying this as a Linux user).
No need to say that what the guy asks for doesn't exist, when it clearly does in the OS with the clear majority market-share.
Or... Again... Am I missing something crucial or really obvious here?
macOS has different certificate stores and different keychains too. The OP's tool specifically opts to use the "admin" cert store in the system keychain, not the user's login keychain (the default for the `security add-trusted-cert` command).
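What mkcert does on macOS is roughly equivalent to the following (a sketch; without sudo and -k, the command would target the user's login keychain instead):

    sudo security add-trusted-cert -d \
      -k /Library/Keychains/System.keychain rootCA.pem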
Honestly if you take Windows users out of the equation, this whole thing could just be a posix shell script, which would also solve the "verified release binaries" issue, IMO.
I believe the point is that any software asking for admin could fiddle with your certificate stores, so there's no sense in asking for a higher standard of integrity from software that tells you it will do so.
From quick testing, in a non-elevated prompt I can edit my local user's trust store, but not the machine-wide one. In an elevated prompt, I'm allowed to edit both. Seems like there are different permissions for different stores to me. (I haven't tested whether any browser considers the user store; if they don't, it's not very relevant for this specific use case. If I remember right, figuring out which browser wants certificates presented in which store was kind of a hassle.)
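For reference, that quick test amounts to something like this with certutil (rootCA.pem is a placeholder):

    :: Succeeds from a non-elevated prompt (current-user store):
    certutil -user -addstore Root rootCA.pem
    :: Requires elevation (machine-wide store):
    certutil -addstore Root rootCA.pem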
>>You cannot install public ACM certificates directly on your website or application. You must install your certificate by using one of the services integrated with ACM and ACM PCA.
In fact I was surprised to see that various providers offer APIs to deploy paid certs.
The funniest part is that this all happened not when Let's Encrypt was an untried experiment but months into deployment when it was already a huge success.
It was the equivalent of that moment when an elderly relative informs you that they can't see these new fangled computers ever amounting to much.
I’m not against paying for a cert but don’t make it much harder than LetsEncrypt to use it.
A single company managing the security for the entire internet? That doesn't sound great.
Besides, their stack is open and available. If monopoly ever becomes a concern in the future, other CAs could spin up using the same software. But that doesn't seem like something worth being concerned about right now.
62% of sites use Let's Encrypt, up from 35% in Nov 2017. Monopolies are bad, be they Microsoft or Google, and that's a worrying trend.
Hopefully everyone has documented all the systems that auto-generate certs and what certs they request, and put that in a manifest, so that they have a contingency plan in the event that LE is knocked offline or shut down for whatever reason. It can certainly happen.
Chrome (in the post's screenshots) follows this; I'm not sure about other UAs.
But you need to use 127.0.0.1, not `localhost`
You have 127.255.255.255 to play with!
Sometimes being understandable is more important than using the most precise and shortest notation.
CIDR is understandable for people who know basic networking. If someone doesn't understand what 127/8 means (which is totally OK), I'd argue they shouldn't yet be learning about the topic we're discussing (valid HTTPS certs for localhost); they should learn something as basic as CIDR first. CIDR isn't difficult to understand. Basics first. You can't learn to speak English if you don't know how to open and close your mouth...
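Concretely, 127/8 is short for 127.0.0.0/8: every address whose first octet is 127, all of which loop back. A quick check (the whole /8 answers on Linux out of the box; macOS only configures 127.0.0.1 by default):

    ping -c1 127.0.0.1     # the usual loopback address
    ping -c1 127.42.42.42  # also loopback; any 127.x.y.z works on Linux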
FWIW for my use case (testing GitHub API calls & webhook feedback), ngrok with a free account was more reliable, and interacted better with Little Snitch.
Is there anything similar for Firefox?
This tool was really helpful when you just want to test your app and don't care at all about all those security restrictions imposed by the browser. mkcert helps, so thank you a lot for this.
At the beginning it was a bit confusing because some knowledge about how all this stuff works is assumed. For complete newcomers like me: I ended up asking for help, and the exact steps one can follow to generate and install a certificate are below, so it might be helpful for someone reading this:
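A minimal sequence, following mkcert's README (the names are just examples):

    mkcert -install    # create a local CA and add it to the trust stores
    mkcert myapp.test localhost 127.0.0.1 ::1
    # This writes ./myapp.test+3.pem and ./myapp.test+3-key.pem;
    # point your dev server's TLS configuration at those two files.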
The article links to a Let's Encrypt post (https://letsencrypt.org/docs/certificates-for-localhost/) which mentions that modern browsers consider "http://127.0.0.1:8000/" to be "potentially trustworthy". Several years ago it was just impossible to communicate from https to local http. Now most browsers (excluding Safari) support communicating with local http from https pages. And there is hostile opposition from a Safari developer to supporting this, even though it is now a standard. (https://bugs.webkit.org/show_bug.cgi?id=171934)
Distributing an unconstrained CA root certificate and asking users to install it is a terrible idea. Even generating one locally is dangerous if the private key is easily exportable. If one still needs to support local https, it's better to limit the scope as much as possible, e.g. issue a certificate only for a sub-subdomain and apply name constraints, as @tedunangst mentioned (an openssl sketch follows the links below). It's easy to do this with the XCA tool:
* XCA: https://github.com/chris2511/xca/
* Name Constraint Wiki: https://wiki.mozilla.org/CA:NameConstraints
* How to create CA with NC using XCA: https://security.stackexchange.com/questions/31376/can-i-res...
* An example how valid/invalid certificate look on Windows https://github.com/DataSpreads/CA
(Name constraints are also not supported on macOS currently.)
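For those who prefer the command line over XCA, roughly the same constrained CA can be sketched with plain openssl (all names are placeholders; the point is the critical nameConstraints extension):

    cat > ca.cnf <<'EOF'
    [req]
    distinguished_name = dn
    x509_extensions = v3_ca
    prompt = no
    [dn]
    CN = Dev CA (constrained)
    [v3_ca]
    basicConstraints = critical, CA:TRUE
    keyUsage = critical, keyCertSign, cRLSign
    nameConstraints = critical, permitted;DNS:.dev.example.com
    EOF
    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
      -keyout ca.key -out ca.crt -config ca.cnf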
Definitely a great product if it works as advertised!
This is a great guide on how to set up SSL on localhost https://deliciousbrains.com/ssl-certificate-authority-for-lo...
One could make a similar comment about using cURL vs building your own HTTP request, to learn the basics themselves :)
A convenient solution in case you can control DNS at the local network level is to just get an ordinary certificate for a real domain (for example dev.mydomain.com) and have it resolve to that local IP.
Using your own CA is possible too, of course, but getting it installed on all the devices is a nuisance.
I’d prefer getting a separate certificate for local.domain.tld instead.
Please correct me if I have a misunderstanding.
How do you make it work if you are working offline, say on a plane?
Re-routing the domain to the local machine (by editing /etc/hosts) should work just the same. A certificate from a trusted CA can be validated without an internet connection; the roots are part of the OS/browser distribution.
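You can demonstrate the offline validation with openssl (a sketch; the CA bundle path is Debian's, and chain.pem stands in for whatever intermediates the server would send):

    openssl verify -CAfile /etc/ssl/certs/ca-certificates.crt \
      -untrusted chain.pem cert.pem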
Adding a new root CA means anyone who gets their hands on the keys (which are openly available to the user of mkcert) gets to completely obliterate your ‘trust’ store and MITM any of your secure connections.
However, for added security you could run mkcert on another computer that is not connected to a network. Then you just copy the root cert and the leaf cert to your dev machine, but leave the root cert’s private key offline.
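A sketch of that split (mkcert keeps its CA in the directory printed by -CAROOT):

    # On the offline machine:
    mkcert myapp.test   # issues the leaf pair, creating the CA on first run
    mkcert -CAROOT      # prints where rootCA.pem / rootCA-key.pem live
    # Copy rootCA.pem plus the leaf cert and key to the dev machine and
    # install rootCA.pem into its trust store; rootCA-key.pem stays offline.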
Also, it seems to me that virtually any way to get those keys involves hostile code running on your machine, meaning you're already toast.
I do also wish it didn't install certs as non-root, but as pointed out in another comment, there is a workaround for that.
I wonder if browsers will allow this to work for localhost?
Here's the script I used to do this last time, just a couple of openssl commands: https://gist.github.com/xPaw/840f378f3fed64806b46211c2287b52...
For example, I implemented the Google OAuth flow recently, and it requires redirect URLs that you have to pre-define. To simplify the code and config, I want to make HTTPS the default and only option, instead of checking an env flag (dev/prod) to decide whether to enable HTTPS. This simplifies the code and still allows me to test HTTPS with it.
My solution was to use AWS and route 443 to 80 in a load balancer. That way there is NO mixed content: it is all HTTPS. (And certificates are free on AWS!)
If you don't like AWS, can't you just use nginx as your load balancer and do the same thing?
Why should a website care about what port traffic is coming in on, shouldn't the TLS happen IN FRONT of your website so the website can be agnostic?
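For what it's worth, the nginx version of that TLS-in-front setup is only a few lines (a sketch; all names and paths are placeholders):

    server {
        listen 443 ssl;
        server_name dev.example.com;
        ssl_certificate     /etc/ssl/dev.example.com.pem;
        ssl_certificate_key /etc/ssl/dev.example.com-key.pem;
        location / {
            proxy_pass http://127.0.0.1:8000;  # plain-HTTP app, agnostic to TLS
        }
    }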
So I feel like the following workflow is simpler, no?
1. Use something like local.mydomain.com as your local dev domain. (set the DNS in Cloudflare / Netlify etc. to 127.0.0.1)
2. Use Let's Encrypt to generate certs for that domain.
Am I going about this the wrong way? (or is there something super insecure that I've missed?)
For the DNS part, I honestly think a hosts-file entry is more flexible, as you can support environments using VMs/containers etc. with a guest that has a DHCP address.
This is way simpler in that case!
From there it's simple to host the front-end app I work on using a node.js script.
I guess I am not the only one with a `genssl.sh` lying around that grew big over the years, using openssl to generate a CA and cert requests, then signing and setting `subjectAltName` properly etc. :-D
I like mkcert since it also injects the root CA into the trust store - was always too lazy to do this programmatically.
Also note that this doesn't have an intermediate CA; the root CA signs everything. And of course: NEVER use this anywhere close to production. (As you can see, even I only used it in my sandbox env.)
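The core of such a script is usually just two openssl commands, something like this sketch (names are placeholders; assumes the CA pair already exists as ca.crt/ca.key, and uses bash process substitution for the extension file):

    openssl req -newkey rsa:2048 -nodes -subj "/CN=myapp.test" \
      -keyout leaf.key -out leaf.csr
    openssl x509 -req -in leaf.csr -CA ca.crt -CAkey ca.key \
      -CAcreateserial -days 365 -out leaf.crt \
      -extfile <(printf "subjectAltName=DNS:myapp.test,IP:127.0.0.1")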
# Install certbot, keeping all its state in a local directory:
sudo apt-get -y install --no-install-recommends certbot
mkdir -p le
# Manual DNS-01 challenge for a name that can then be pointed at 127.0.0.1:
certbot --work-dir ./le --logs-dir ./le --config-dir ./le -d localhost.mydomain.com --manual --preferred-challenges dns certonly
Adding reverse proxies for different local services becomes a piece of cake, but mkcert allows you to use `localhost` directly if needed. Personally I never have needed to, and I like not having to make any changes to the trust store.
192-168-86-30.localresolve.com has A record 192.168.86.30
10-10-10-1.localresolve.com has A record 10.10.10.1
http://lvh.me is a free service that resolves itself along with all subdomains to localhost.
Edit: http://nip.io/ looks like it does exactly what you asked.